1. Preamble
This is a lightning talk written for the 2014-05 APIStrat Un-Workshop event.
Top links for this presentation:
- http://g.mamund.com/gluecon2014-talk (written text w/ related links)
- http://g.mamund.com/gluecon2014-slides (PDF slides)
2. The Talk (5mins)
This is the prepared text for the talk.
2.1. Jet Packs and Driverless Cars (Mark Minute #0)
OK, I only have a short amount of time and we have several distinguished speakers here today, so I’ll get right to the point.
I gave a talk at API Days Paris in December 2013 and one of the things I said was that "I wanted the jet pack I was promised as a small boy in the 1960s."
But that was only partially true. In fact, just a year earlier, in a talk for the WS-REST Workshop in 2012, I publicly coveted something I claimed was better than a jet pack…
Google’s driverless car.
What can I say, I’m fickle.
But I mention this whole car thing because just this past week, the gCar (as I call it) was in the popular computer press again.
And one angle of the story came out that I find especially interesting.
See, the gCar "works" because it already has a map of all the roads, intersections, stop lights, and more. It does not "discover" anything. It only recognizes features of the landscape that it already knows about.
In fact, if something unexpected appears in the path of the gCar — for example, a road worker flagging down traffic — the car simply stops in the middle of the road. It doesn’t recognize this new thing so it cannot process it or make any decisions about it.
2.2. Goal-driven (Mark Minute #1)
See, the gCar depends on a standard representation of the landscape. But the gCar does not memorize the path between locations. The car still has to recognize important features and react accordingly. That’s why the car is able to navigate intersections, respond to traffic lights, yield to pedestrians, and so forth.
- animate appearances of the same car in lots of locations on the map
And we don’t need to build a new car for each location to which we want to travel. Instead, the same car is able to make its way through the landscape to the next destination by recognizing the landmarks — the affordances — in the surroundings. The car is constantly offered a new representation of the landscape, interprets that representation, and acts on it in order to reach its goal.
This probably sounds familiar. This cycle has a name:
"The Action Lifecycle," and it was first codified by Donald Norman.
In fact, we’re confronted with these kinds of goal-driven automatons every single day of our lives.
From wildlife…
…to micro-biology…
…to daily traffic on the roadways.
In all these cases, the rules are similar:
I have a goal in mind, and I have a landscape to traverse. As long as there are things I recognize, I can navigate in order to reach my final destination.
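These navigation rules can be sketched as a simple loop. This is a hypothetical illustration of the goal-driven cycle described above, not code from the talk; the function names, the landscape shape, and the "stop when nothing is recognized" behavior are all my own assumptions.

```python
# Sketch of a goal-driven navigator: perceive the current surroundings,
# interpret the recognizable options (affordances), act on one, repeat.
# All names and data shapes here are invented for illustration.

def navigate(start, goal, landscape):
    """Walk a landscape by recognizing affordances rather than
    memorizing a fixed route. `landscape` maps each location to the
    set of locations reachable from it."""
    location = start
    visited = {start}
    while location != goal:
        affordances = landscape.get(location, set())  # perceive
        choices = affordances - visited               # interpret
        if not choices:
            # Like the gCar facing a road worker: nothing recognizable,
            # so stop rather than guess.
            raise RuntimeError("nothing recognizable; stopping")
        location = min(choices)                       # act (deterministic pick)
        visited.add(location)
    return visited

# The same "car" handles any landscape it can recognize.
landscape = {"A": {"B", "C"}, "B": {"D"}, "C": {"D"}, "D": set()}
print(navigate("A", "D", landscape))
```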
2.3. But we don’t program this way (Mark Minute #2)
Sadly, even after hundreds of years of observing this behavior in nature …
and more than 50 years of computing,
we still do not program in this way. We don’t create applications that recognize the landscape and navigate that space in order to reach a goal.
Instead we write thousands of one-off apps that memorize a single solitary path from A to B and insist on taking that same path time and time again, even if that results in bugs, errors, and repeated failure.
I must confess that it is painful for me to see API developers — especially client-side developers — build endless variations on the same broken sequential execution code that ignores the nature of message-based systems.
Especially since we learned this lesson more than 30 years ago for graphical interfaces…
and use this as the default for web server code:
and we even continue to use this model for gaming apps:
yet it’s rare to see it in web clients:
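The contrast here — event-driven dispatch everywhere except web clients — can be sketched as a client built around a message loop. This is a minimal, hypothetical sketch; the message types and handler names are invented for illustration and don't come from any real framework or API.

```python
# Sketch: a client that reacts to whatever messages arrive, instead of
# hard-coding one sequential path from A to B. An unrecognized message
# is skipped safely rather than crashing the app.

def handle_list(msg):
    return f"render {len(msg['items'])} items"

def handle_error(msg):
    return f"show error: {msg['reason']}"

HANDLERS = {"list": handle_list, "error": handle_error}

def client_loop(inbox):
    """Dispatch each incoming message by recognizing its type,
    the way a GUI loop or game loop dispatches events."""
    results = []
    for msg in inbox:
        handler = HANDLERS.get(msg["type"])
        if handler is None:
            results.append("ignore unrecognized message")
        else:
            results.append(handler(msg))
    return results

print(client_loop([
    {"type": "list", "items": [1, 2, 3]},
    {"type": "error", "reason": "timeout"},
    {"type": "mystery"},
]))
```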
2.4. Because we don’t have the proper maps (Mark Minute #3)
And why does this problem persist?
Because we don’t have the proper maps.
Notice, I said "proper maps." What’s a proper map?
One that tells you what you can expect up ahead …
One that contains enough information to safely navigate the space …
And gives you clear choices along the way.
Notice that these are all examples of signs within a landscape. Because a proper map is not just a picture:
A proper map — a useful map — is an annotated representation of the landscape. One that shows important features and details that are not always available when you’re actually "on the ground" or "in the space" itself.
Because it is the annotations, the symbols, that we use when navigating.
Without those, we might as well have no map at all.
And frankly, this "No Map Available" situation is what most client developers are still dealing with today.
They rarely have common symbols they can depend upon when navigating the landscape of the Web.
Instead they are stuck staring at un-annotated "photographs" of the landscape.
2.5. Let’s make maps! (Mark Minute #4)
So, let’s make maps! And the first thing to keep in mind here is that this is not a map:
That’s a route; a fixed set of directions to get from A to B.
What developers need is not more routes,
but more maps.
- https://gist.github.com/mamund/9443276 (screen shot from here)
Maps don’t tell you there is only one way to get from A to B.
They allow people to understand the landscape and make their own navigational choices.
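One way to picture that difference in code: a route is a list of URLs baked into the client, while a map is a response annotated with named affordances the client can choose from. The data shapes and link-relation names below are invented examples, not a real API or media type.

```python
# Hypothetical contrast between a "route" and a "map" for an API client.

# A route: one fixed path, memorized by the client.
route = ["/users", "/users/42", "/users/42/orders"]

# A map: each response annotates the landscape with named affordances,
# and the client picks its next step by recognizing a relation name.
response = {
    "items": [{"id": 42}],
    "links": [
        {"rel": "next", "href": "/users?page=2"},
        {"rel": "item", "href": "/users/42"},
    ],
}

def follow(response, rel):
    """Choose the next destination by its annotation (link relation),
    not by a memorized URL."""
    for link in response["links"]:
        if link["rel"] == rel:
            return link["href"]
    return None  # unrecognized: the client can stop safely

print(follow(response, "item"))
```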
Because when developers can count on consistent maps with clear annotations…
…they can avoid having to cobble together one-off apps for each and every snowflake API out there.
And that means we can each get our own version of a driverless car for Web APIs.
Thanks.
- same as title slide, but with http://g.mamund.com/gluecon2014-talk