
Opinionated (RPC) APIs vs RESTful APIs

If you are not already aware, a few years ago there was some debate on the Internet as to when it was ok to call an API “RESTful”.  I won’t go into all the details, but suffice it to say that the well-respected originator of the term REST did not agree with the way his ideas were being implemented.  One outcome of this debate was that we (the loose community of API developers) now use the ugly acronym HATEOAS to refer to the style of REST which its creator envisioned, and the purists don’t freak out too much when people call their “other” APIs RESTful.

As of 2012, the scale of HTTP web services looks something like this (overlapping circles indicate shared principles):

[Figure: scale of web service architectural styles]

My own API product (node-perfectapi) is biased towards the RPC side, with many of the advantageous RESTful principles built-in.  It is that way because I am a proponent of opinionated APIs, and I don’t think that the document/resource architectural style is a good fit for that.  In fact, I don’t think it is a good fit for any scenario where your data is not naturally a document.

I had a sneaking suspicion I might be missing some greater truth though – after all, the constraints on the right hand side must have associated benefits, right?  So I started investigating…

The Benefits of REST

One of the perceived benefits of doing REST is that the many intermediate layers in the Internet can handle caching.  I looked at the popular caching product Varnish, and was surprised that it is unable to automatically take advantage of even the most basic REST principles.  Out of the box, it caches the GETs that you explicitly tell it to.  Cache invalidations are configured manually.  In my research, other caching products seem similar.   The bottom line is that as long as you use idempotent GETs for querying data, you are doing just fine.  Beyond that, there is no inherent caching advantage to be gained by doing REST.

Some other benefits are more of a slam-dunk.  Content negotiation (returning JSON or XML based on the request headers) is a nice way to ensure that clients can talk in the format that is most natural for them.  Stateless servers are a well-proven boon to scalability.  But… these same principles are easy to incorporate into RPC too.
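
To make that concrete, here is a rough sketch (my own, not part of node-perfectapi – the /rpc/get route and response shape are invented for illustration) of an RPC-style Express endpoint that still does content negotiation and keeps the server stateless:

const express = require('express');          // assumption: Express is used for the sketch
const app = express();

app.post('/rpc/get', express.json(), (req, res) => {
  const result = { value: 42 };              // placeholder for the real RPC result

  // Content negotiation: answer in whatever the Accept header asks for, defaulting to JSON.
  res.format({
    'application/json': () => res.json(result),
    'application/xml': () =>
      res.type('application/xml').send('<result><value>' + result.value + '</value></result>'),
    default: () => res.json(result),
  });
});

app.listen(3000);                            // stateless: no per-client state is kept between calls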

The one RESTful thing that is unnatural for RPC is the document/resource-oriented paradigm of “one URI for each resource”.  There is a benefit of consistency, in that the same resource format you POST/PUT is what you GET back when you query the resource.  It promotes a nice warm feeling in my tummy when things are so nice and symmetrical.  Documentation is simpler because you spend time documenting the format of the resource once, instead of documenting several RPC functions, each of which may involve some or all of the same resource.  Community acceptance is also better, because REST is currently in favor.

The Downsides of REST

There are several downsides to RESTful services, such as:

  • figuring out PUT vs POST (for both client and server developers)
  • making use of PATCH, and generally dealing with partial documents (see the sketch after this list)
  • dealing with gateways and proxies that don’t support arbitrary HTTP methods
  • running out of HTTP methods on an endpoint
  • you still have to read the docs to understand when to use which HTTP method.  It’s not self-documenting (but it is more self-documenting than RPC!)
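
To make the first two bullets concrete, here is a minimal sketch (the /articles/:id resource and its fields are invented for illustration) of how PUT and PATCH typically differ:

const express = require('express');
const app = express();
app.use(express.json());

const articles = { '1': { title: 'Hello', body: '...', tags: ['api'] } };

// PUT: the client sends the complete representation; anything it omits is gone.
app.put('/articles/:id', (req, res) => {
  articles[req.params.id] = req.body;
  res.json(articles[req.params.id]);
});

// PATCH: the client sends a partial document containing only the fields to change.
app.patch('/articles/:id', (req, res) => {
  Object.assign(articles[req.params.id], req.body);
  res.json(articles[req.params.id]);
});

app.listen(3000);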

An Experiment – converting RPC to RESTful

As an experiment, I decided to try to re-design a simple RPC API in a RESTful way.  I chose node-sharedmem, which is a Node.js HTTP server that can be used as a shared memory space by processes that need one.

The functions on the RPC-based API are as follows:

  • save(collection, key, value, [TTL]) – saves a key-value pair in a named collection, optionally set to expire in TTL milliseconds
  • get(collection, key) – retrieves a saved value from a collection (returns the value)
  • remove(collection, key) – removes a saved key-value pair from a collection
  • increment(counter) – increments a named counter and returns the new value (integer)
  • decrement(counter) – decrements a named counter and returns the new value (integer)
  • getArray(collection) – returns all key-value pairs within a named collection

This is a very opinionated API, with no concept of a document.  I can easily identify some resources though – collections, counters and variables (key-value pairs).

I designed the following REST URIs to replace the functions, and return the same results:

  • /collection/{collection}/variable/{key} – POST, same as save function
  • /collection/{collection}/variable/{key} – GET, same as get function
  • /collection/{collection}/variable/{key} – DELETE, same as remove function
  • /counter/{counter}/increment – POST, same as increment function
  • /counter/{counter}/decrement – POST, same as decrement function
  • /collection/{collection}/variables – GET, same as getArray function

(For the counter, I decided to hard-code ‘increment’ and ‘decrement’ in the URI, rather than the more risky approach of inventing new HTTP methods).
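
Here is a rough Express sketch of those routes (my own illustration, with simplified in-memory storage and TTL handling – not node-sharedmem’s actual implementation):

const express = require('express');
const app = express();
app.use(express.json());

const collections = {};   // { collectionName: { key: { value, expiresAt } } }
const counters = {};      // { counterName: integer }

// POST – same as save(collection, key, value, [TTL])
app.post('/collection/:collection/variable/:key', (req, res) => {
  const col = collections[req.params.collection] = collections[req.params.collection] || {};
  const ttl = req.body.ttl;   // optional TTL in milliseconds
  col[req.params.key] = { value: req.body.value, expiresAt: ttl ? Date.now() + ttl : null };
  res.sendStatus(204);
});

// GET – same as get(collection, key); note it returns only the value, not the TTL
app.get('/collection/:collection/variable/:key', (req, res) => {
  const entry = (collections[req.params.collection] || {})[req.params.key];
  if (!entry || (entry.expiresAt && entry.expiresAt < Date.now())) return res.sendStatus(404);
  res.json(entry.value);
});

// DELETE – same as remove(collection, key)
app.delete('/collection/:collection/variable/:key', (req, res) => {
  delete (collections[req.params.collection] || {})[req.params.key];
  res.sendStatus(204);
});

// POST – same as increment(counter) / decrement(counter), returning the new value
app.post('/counter/:counter/increment', (req, res) => {
  counters[req.params.counter] = (counters[req.params.counter] || 0) + 1;
  res.json(counters[req.params.counter]);
});
app.post('/counter/:counter/decrement', (req, res) => {
  counters[req.params.counter] = (counters[req.params.counter] || 0) - 1;
  res.json(counters[req.params.counter]);
});

// GET – same as getArray(collection)
app.get('/collection/:collection/variables', (req, res) => {
  res.json(collections[req.params.collection] || {});
});

app.listen(3000);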

Somewhat surprisingly, this was easy to do and the result looks RESTful to me!  (But it is not HATEOAS – it meets none of the unique characteristics shown in the blue circle on the diagram.)

If I had to critique the solution, I would say that

  1. it exposes too few endpoints (a more RESTful solution might expose endpoints to show all the counters, or all the collections).  This limits discoverability.
  2. the GET of a variable does not return the whole resource – it excludes the TTL and just returns the value.  So we don’t have the resource-format consistency gain that we expect from REST.

Conclusions

It is easy to put a RESTful face on an RPC web service, but the facade will not bring the full benefits of consistency (of a common resource format) and discoverability (of a complete set of resource endpoints).

That said, you do get some of the RESTful benefits – consistent endpoints, and discoverability of the current functionality.  In addition it is way less work than designing a full, discoverable RESTful API, because the full API has many more endpoints than you might want to create, test and support.

Afterthoughts on HATEOAS

I think there is very little in HATEOAS that is of use for APIs.  It works well for web pages, but APIs require more shared knowledge (coupling) than shared knowledge of media types can provide.

The main HATEOAS tool that many people have started to include in their APIs is link relations.  They are useful in several scenarios, e.g.

  • paging – returning a link to the next page of results makes it much easier for both the client and the API developer
  • linking to related data – for example, a document might have links to more detail, or history, etc.

Even those links are not really HATEOAS though –  to be HATEOAS, the knowledge of how to interpret the links has to be derived from the media type of the document.  For example in HTML we know how to interpret HREFs and FORMs.  In the common formats of JSON or XML we have no such standard.
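
For example, a paged JSON response often carries something like the following (the “links”/“next” field names are just a convention I picked for illustration – nothing in the application/json media type tells a generic client what “next” means):

{
  "results": [ "...first page of items..." ],
  "links": {
    "next": "https://api.example.com/widgets?page=2"
  }
}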


An experiment in re-use

The next time that you have the opportunity to re-write part or all of an existing software application, consider performing this experiment.

It is a thought experiment to help you determine which are the most valuable pieces of code that you will write, and perhaps achieve some enlightenment about the nature of software. The experiment can be done in your head or with pen & paper:


Review the existing application, and find all of the parts that you think are re-usable in your current effort. Perhaps there will be just a few, perhaps there will be none, perhaps there will be a bunch.

Consider each of the re-usable parts and answer the questions:

  • why is this re-usable?
  • what would have to change in the current effort to make this not re-usable?

Now, skip forward in time and imagine you have successfully completed re-writing the parts that you decided to re-write. It was a wonderful success, and the application has grown and expanded in wonderful ways.

Technologies have changed, it is 10 years in the future, and it is time for the next re-write. Perform the experiment again, and write down the answers that you will want to have this time.


The point of the experiment is to highlight that there is very little value intrinsically stored in source code. The only things that have long-term value are abstractions and standards.

Standards (HTML, CSS) are valuable only as long as the technology survives. They are valuable because everyone depends on them, and it simply costs too much to abandon them completely.

A sub-category of standards are vendor-specific technologies. Think VB6 forms, .NET forms, ASP or PHP or XAML. They still have value, but they come with the cost of a technology tie-in – they are only valuable as long as the technology lives.

Many standards are a form of debt. You accept a lower cost of development today, knowing that the application can only be maintained and grown for as long as the technology or standard remains viable. It is often a very good type of debt, because there is a balloon payment at the end that you will never have to make (the application will die before it becomes necessary).

Abstractions are longer-lived, because they can represent some fundamental domain knowledge. They are valuable as long as the domain and your assumptions do not change.


Demo Page

I uploaded a new demo page for the PerfectAPI toolset today.  You can find it at http://amigen.perfectapi.com/ or from the link on the main website. The demo is of my “amigen” API, which provides a way to generate Amazon AWS images (virtual machines that run in Amazon’s cloud).

The demo page showcases the following PerfectAPI features:

Self-configuring endpoint

Simply add a script reference to the provided JavaScript file in your HTML, and then you can directly call the exposed methods of that API without any further knowledge of URLs.

Simple RPC-style calls

Calls to the API are simple asynchronous calls in the RPC style (see the example code on the demo page). There is no Ajax, JSON, JSONP or REST to worry about – it’s just simple JavaScript code.
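
To give a flavor of the style (the object and method names below are hypothetical placeholders, not the actual amigen API – the real example code is on the demo page), a call looks roughly like this after you add the script reference:

// Hypothetical RPC-style call from the browser; the real method names come from the demo/test page.
amigen.someMethod({ option: 'value' }, function (err, result) {
  if (err) return console.error(err);
  console.log(result);   // plain JavaScript callback – no manual Ajax/JSONP/REST plumbing
});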

Long-running API calls

Generating an image can take a while. Normally, such a call would time out after 15–30 seconds; however, this does not happen with PerfectAPI.

Test Page

The demo page is a custom page, but there is also an automatically generated “test” page for the API, where you can explore the API and see actual code in the language of your choice.

New perfectapi.com website

I put a new version of the perfectapi.com website up today.  Previously I was just displaying this blog at that address.

[Screenshot: perfectapi.com home page]

My business model is the common open source one, i.e. make a product that is free & open source and charge for additional services.

The page has a signup for the “public beta”, which is really just a chance for someone that is interested to get in on the ground floor with free support and services.

PerfectAPI is both the name of the business and the name of the product, which can be a little confusing.  The product is a set of tools that facilitate the development of APIs, and the meshing of different APIs together in the cloud in such a way that they form a complete service offering.

Some of my competitors are apigee.com and hook.io (by Nodejitsu).  Both of these companies offer a way of building a service offering using APIs, but each has a different focus from the other and from mine.

My own focus is based on a vision of interconnected, discoverable services on the Internet, done very simply.  I know what my next steps are and I’m moving forward with that.

One of those next steps is to try to drive involvement and usage in the Node.js community.  I think my Node.js offering is still a beta product, but it is nevertheless very compelling for someone wanting to create a service API on the Internet.

Another piece I have to get working soon is authorization, i.e. OAuth, OpenID, etc.

Anyway, if you have any questions, please comment on the website itself or leave a comment below.

Self-Hosting a Small WordPress Blog or Website on Amazon EC2

Last week, as part of my effort to move my domains away from godaddy.com, I decided to move this blog.  That move is complete and what you are looking at is now hosted on an Amazon EC2 “instance”.  What follows is my experience and notes on the costs of doing this.

The Costs

First off, on a 3-year hosting plan at GoDaddy, I was paying about $3.20 per month, which is very, very cheap.  You cannot get that price directly, but I had some coupons.  A more normal cost for GoDaddy is about $5.00 per month.  My final costs for my new hosting solution on EC2 are:

  • First year: $0 per month.
  • Next 3 years: $6.43 per month, plus bandwidth (mostly free, unless I hit the front page of Reddit).

This is not as good as GoDaddy, but it is a very acceptable rate to me (given that I have complete root access to the machine and can do anything I want with it).

Below is the analysis of the EC2 costs.  For reference, here is a link to the EC2 pricing sheet.  My costs are for the us-east Amazon region – costs in other regions are different.

  • First year:  Amazon gives away one t1.micro instance plus bandwidth to new AWS customers for the first year.  This is why the first year is free.
  • Next 3 years:  $100 for 3 years of a “Heavy Utilization Reserved Instance” (t1.micro), plus $0.005 per hour ($43.80 per year).  Do the math ($100 ÷ 36 months ≈ $2.78/month, plus $43.80 ÷ 12 ≈ $3.65/month) and that comes out to $6.43 per month.

Without the reserved instance purchase, the cost would be $14.60 per month – significantly more.  Don’t make the same mistake I did – wait for your free year to expire before purchasing the reserved instance!  (Aaargh!  Actually, it’s not so bad for me, because I plan on having more instances, so the free tier was never going to be enough.)

Actually doing the move – the easy way

[UPDATE: I have posted a short YouTube video on how to do this]

The first step was to go into my existing WordPress blog and do a complete export of my posts and pages (Tools/Export/All Content).  With this downloaded to my local machine, I felt ready to get going.

WordPress boasts a “famous 5 minute install”, but don’t be misled by that – it can be done, but it depends highly on your Linux skills.  The PerfectAPI vision is to make things simpler, so I have the following for you, with all my experience of doing the move baked in.  Skip to “the hard way” further down in this post if you like it that way.

  • ami-e5e6328c – a us-east Amazon EC2 Ubuntu 11.10 EBS image that is set to go with what you need.  Assuming you are already signed up and have some experience using SSH with Linux machines on EC2, this should give you something close to the 5 minute install.

To start with the image above, first launch it into a t1.micro instance.  Be sure to use a security group that allows port 80 (HTTP) access and SSH access (port 22).  Once the image is running, SSH into it using a tool like PuTTY.  A reasonable & quick guide to using PuTTY for this purpose can be found here.  Note that the login name for the instance above is “ubuntu”, not “root”.

Once logged in, install WordPress by executing the command:

sudo ./install.sh

(This is my custom install script.)  Installing in this way ensures your instance has its own unique passwords for MySQL and WordPress (not the same as everyone else using the above image).

Once the install completes (10 seconds), go to the AWS console and set yourself up an “Elastic IP”.  (An Elastic IP is just a public IP address that you can point your domain at.)  Associate your new Elastic IP with your new instance.  Then, go to your blog at:

http://your-elastic-ip/

…and complete the WordPress setup.  Use Tools/Import/WordPress to re-import all of your posts and pages.  Set up your theme, play with your widgets, install some plugins.  Do not change the URL of the blog in your General Settings until you are ready to switch over.  Failing to heed this advice will make the new blog redirect to the old one whenever you log in – very annoying and difficult to change back.

The final step is to switch your domain records – use whatever tools your DNS provider has to point your domain at your Elastic IP address.  After that, you can set the correct URL in the General Settings of the new blog.  It can take a while before DNS changes kick in – if you’re impatient, you can temporarily edit your hosts file to see the changes early.

Put a note on your calendar for 1 year after your Amazon AWS signup date, to purchase yourself a reserved instance, so that the lower pricing kicks in.

Doing the move – the hard way

The first trick for getting any new instance up on Amazon is to find a base AMI image that you like.  I like the Ubuntu images at Alestic, so that is where I started.  After launching an instance from a base image, you still have a way to go before you can even get started on WordPress – for example, you need to install a LAMP stack (Apache, MySQL, PHP), you want to ensure the instance stays up to date with Linux security patches, etc.  Anyway, I created scripts to do all of this in a repeatable way, and they can be found in my amigenerator project on GitHub.

If you need them, here is a direct link to the scripts.  The scripts only work when run from the ami-generator tool, so looking at them on GitHub is mostly for education.  The ami-generator tool itself is still in the alpha stage, so I am not going to include instructions on how to install or use it here.

My advice – just do it the easy way instead.  (But if you do have suggestions on how to improve the scripts, please do let me know).



Introducing… amigen node.js package

This project is still very much a work-in-progress, but it is far enough along that I have published it.

What it does is provide a framework for creating predictable, pre-installed machine images on Amazon EC2 (the Amazon cloud). My motivation for doing this is to assist people who don’t necessarily want to get into all the dirty details of creating and maintaining Linux images. They just want something that is pre-configured to their specification.

The project is hosted at github.com.  Usage requires some knowledge of Node.js.

Features so far include:

  • images automatically install security updates
  • various common software can be installed

That’s all for now…

Introducing… Pokki Pomodoro Timer

For fun, and for a chance at winning $30K, I created this small Pokki app. Please download and enjoy!

Download Pomodoro Timer for Pokki

A Pomodoro timer is just a little countdown timer that helps you manage your time by breaking work up into 25-minute chunks.


While the timer runs, it also shows the number of minutes remaining in your taskbar…


…and when it’s done, it plays a little egg-timer sound to let you know it’s time to take a short break.
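
The core of such a timer is tiny; here is a rough JavaScript sketch of a 25-minute countdown (my own illustration, not the actual Pokki app code):

const POMODORO_MS = 25 * 60 * 1000;             // one 25-minute chunk of work
const endTime = Date.now() + POMODORO_MS;

const timer = setInterval(() => {
  const remainingMs = endTime - Date.now();
  if (remainingMs <= 0) {
    clearInterval(timer);
    console.log('Time for a short break!');     // the real app plays an egg-timer sound here
    return;
  }
  const minutes = Math.ceil(remainingMs / 60000);
  console.log(minutes + ' minute(s) remaining'); // the real app shows this in the taskbar icon
}, 60 * 1000);                                  // update once a minute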

Download Pomodoro Timer for Pokki

Intro to Perfect API

As it states in the About page, the vision of Perfect API is

…to simplify the act of programming

The first step in doing this will be to create an ecosystem that enables better code reuse across projects, both open source and closed source.  To do this, we propose a change to the way we construct software, by introducing some new fundamental building blocks.

Right now, we’re working on defining and prototyping those building blocks. We’ll post more on this blog as we get nearer a beta release.


The difficult blue eyes logic puzzle

See also the xkcd version, or the one on Wikipedia. I found the original link on Damien Katz’s blog. It’s difficult for me, because there is a widely accepted solution that I could not grasp for quite some time. The Wikipedia link includes the solution; read one of the others if you just want the problem.

Here is the puzzle, followed closely by the solution:

On an island, there are 100 people who have blue eyes, and the rest of the people have green eyes. If a person ever knows herself to have blue eyes, she must leave the island at dawn the next day. Each person can see every other person’s eye color, there are no mirrors, and there is no discussion of eye color. At some point, an outsider comes to the island and makes the following public announcement, heard and understood by all people on the island: “at least one of you has blue eyes”. The problem: assuming all persons on the island are truthful and completely logical, what is the eventual outcome?

The accepted solution is that all 100 of the blue-eyed people leave the island after 100 days. The short (and easily misunderstood) explanation is that the outsider introduced some “common knowledge” that was not there before, which allowed all the blue-eyed people to deduce their eye color.

The proof uses induction, and goes like this. If there were only 1 blue-eyed person (n=1), then he would see that there are no other blue-eyed people, and deduce that he is the one person the outsider mentioned. He would leave the island. If there were 2 blue-eyed people (n=2), then they would both see the other and expect the other to leave on day 1. When neither leaves the island after 1 day, they will each realize that they must be the “other one” with blue eyes, and leave together on day 2. Using induction, bla bla, 100 days later all blue-eyed people leave.

Let’s look at that more closely.

The argument works for day 1. Fairly obvious: the blue-eyed person sees no other blue eyes, so he knows he is the one and leaves.

The argument still works for day 2. At first it seems the 2nd blue-eyed person has no reason to assume he is the “other one”. But he knows that there is more than one (a lone blue-eyed person would have left after day 1), and he can only see one – so he must be the other.

Consider a green-eyed person on day 2. He would also know that there is more than 1 blue-eyed person. But he can see 2 blue-eyed people, so he will do nothing. He will not know that he has green eyes – he will simply reserve his judgment until day 3.

Eventually day 100 comes (induction allows us to jump forward like that), and all blue-eyed people are confronted with the inevitable truth, and they leave.
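
If you prefer to see the induction play out mechanically, here is a small JavaScript simulation (my own sketch – it encodes the conclusion of the argument, “leave on the day after the number of blue-eyed people you can see”, rather than the nested knowledge reasoning itself):

function simulate(blueCount, greenCount) {
  const islanders = [];
  for (let i = 0; i < blueCount; i++) islanders.push({ eyes: 'blue', left: false });
  for (let i = 0; i < greenCount; i++) islanders.push({ eyes: 'green', left: false });

  for (let day = 1; day <= blueCount + 1; day++) {
    // Each remaining islander counts the blue-eyed people still on the island (excluding themself).
    // If no blue-eyed person has left by the day matching that count, they conclude they are blue-eyed too.
    const leavers = islanders.filter(p => {
      if (p.left) return false;
      const blueSeen = islanders.filter(q => q !== p && q.eyes === 'blue' && !q.left).length;
      return day === blueSeen + 1;
    });
    leavers.forEach(p => { p.left = true; });
    if (leavers.length > 0) {
      console.log('Day ' + day + ': ' + leavers.length + ' ' + leavers[0].eyes + '-eyed islander(s) leave');
    }
  }
}

simulate(100, 50);   // prints: Day 100: 100 blue-eyed islander(s) leave (the green-eyed islanders never leave)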

Further truths:

  • The first-day pronouncement that someone has blue eyes appears to add no new knowledge. This is true for everything except the simplest case of a single blue-eyed person. The pronouncement is a device to assist in the induction proof. Really, they would simply leave 100 days after they got there, no outsider pronouncement necessary. (That is just harder to explain/prove.)
  • In some versions of the puzzle, a person has to know their eye color to leave (the example above is limited to blue). In those versions, if all of them know there are only 2 eye colors, then on day 101, all green-eyed people will leave too. They would leave earlier if there were > 1 of them and < 100, and then the blue-eyed people would leave one day later.