The Geek Mythology Guide to REST APIs

In the year 2000, Roy Fielding’s acclaimed dissertation introduced Representational State Transfer (REST), an architectural style built on a series of constraints meant to simplify and standardize web services development. Although these guidelines were not immediately adopted as the norm, they paved the way for today’s petabyte-scale web architectures. Before exploring the principles behind REST APIs and high scalability, an overview of how the web works is required, with specific attention to its most popular protocol: HTTP.

I. HTTP: the winged messenger

If the World Wide Web were explained through Greek mythology (its precursor in complexity), the HyperText Transfer Protocol would be the winged messenger Hermes. Just as Hermes was the only Olympian god with the power to travel between the realms of the living and the dead, HTTP is the protocol that carries messages between a seemingly “living” client (a web browser, to simplify) and a “stateless” web server. The decoupling of client and server, where one is the requester of information and the other is the sender, is the first of the REST constraints.

Browsers like Chrome, Safari or Firefox are helpful examples to illustrate this data exchange, but a client can be any software tool programmed to request and receive information. Whether you click on an image that directs you to a new web page, or a retail site connects to a courier service during checkout, the client-server computing model remains the same.

Hypertext, hypermedia and web resources

In simple terms, hypertext is text that contains links to other texts. The same concept applies to hypermedia, except that the linking to other content is done through images, graphics, video and sound. The target of that link lives on a web server and is considered a web resource. It is returned to the client in the form of representations, hence the name “Representational State Transfer”.

Think of a resource as anything on the internet that should be identifiable to enable storing, retrieving and modifying it — a user account, blog post, shopping cart item, flight destination and so on. Resources are identified using unique strings called URIs (Uniform Resource Identifier).

For web tech initiates, we should start by looking at the most recognizable type of URI: the URL (Uniform Resource Locator). Notice how URLs begin with “http://”. That’s our winged deity, indicating its status as the universally accepted protocol for locating a resource. The pantheon of concepts we’ve just sprinted through should become clearer once we look at what happens when you type a URL into a browser:

1. The client contacts the Domain Name System (DNS) to locate the IP address that is mapped to the requested URL.

*An IP address indicates where the resource that corresponds to that URL is located, i.e. the server that hosts the webpage or web resource.

2. Once the client knows which server to contact, it establishes a TCP connection with that server and sends an HTTP request.

3. The server processes this request and returns an HTTP response, which contains the HTML (HyperText Markup Language) page in the response body.

4. The response is rendered by the web browser and the solicited content, including text, image, video or sound, appears.
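Under the hood, the request in step 2 and the response in step 3 are plain text. A minimal Python sketch of what that text looks like (no real network involved; example.com and the HTML body are placeholders):

```python
# A minimal HTTP/1.1 exchange, built and parsed by hand.
# "example.com" and the body below are illustrative placeholders.

request = (
    "GET /index.html HTTP/1.1\r\n"   # method, path, protocol version
    "Host: example.com\r\n"          # which site on this server we want
    "Accept: text/html\r\n"
    "\r\n"                           # a blank line ends the headers
)

response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html><body>Hello</body></html>"  # the HTML the browser renders
)

# Split the response into its status/header section and its body.
head, _, body = response.partition("\r\n\r\n")
status_line = head.split("\r\n")[0]

print(status_line)  # HTTP/1.1 200 OK
print(body)         # <html><body>Hello</body></html>
```

The status line tells the client how the request went; the body carries the representation the browser renders in step 4.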

Voilà, mission (almost) accomplished. Under HTTP, a stateless protocol, and the REST architectural style, a client-server transaction can only be considered successful when the server retains no client session data from the exchange once the response is sent. This principle is referred to as “statelessness” and leads us to the next section for understanding REST APIs.

II. “A State for one man is no State at all.”


A system is deemed stateful from the perspective of the backend server, which stores vital information related to the client session, such as user authentication, authorization and data validation. In REST, however, all the information required to identify incoming requests is provided by the client. The stateless restriction stipulates that each client-issued request is handled as a single, isolated transaction: client devices store and resend the necessary data, and the server cannot reuse or rely on data from previous requests.

The impact this constraint has on a web service’s scalability is monumental, as statelessness allows for load balancing. Incoming requests can be routed to any web server, and the number of web servers can be scaled up or down to match the expected workload.
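To illustrate (the handler and token store below are invented, not a real framework), a stateless server validates each request purely from what the request itself carries, so a load balancer can hand it to any replica:

```python
# Hypothetical sketch: every request carries its own credentials,
# so no server-side session store is needed and any replica can answer.

VALID_TOKENS = {"token-abc": "alice"}  # stands in for real token verification

def handle_request(headers: dict, path: str) -> tuple[int, str]:
    token = headers.get("Authorization", "").removeprefix("Bearer ").strip()
    user = VALID_TOKENS.get(token)
    if user is None:
        return 401, "Unauthorized"
    # Everything needed to serve the request arrived with the request itself.
    return 200, f"Hello {user}, here is {path}"

# Because handlers share no session state, a load balancer can send each
# request to a different replica and get identical results.
replicas = [handle_request, handle_request, handle_request]
for replica in replicas:
    status, body = replica({"Authorization": "Bearer token-abc"}, "/cart")
    assert status == 200
```

If the server had kept the session in local memory instead, only the replica that served the first request could serve the next one, and this loop would break.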

In Fielding’s words: “REST ignores the details of component implementation and protocol syntax in order to focus on the roles of components, the constraints upon their interaction with other components, and their interpretation of significant data elements.”

In your average mortal’s words: statelessness is what enables scaling requests to multitudes of servers distributed across the globe.

III. Scalabilitas Opus Magnum: REST APIs

REST APIs function in a very similar way to common web transactions, as they also use HTTP; the difference is that the data exchange occurs between two software programs or products. No graphical user interface relays the result of the transaction, since the client is usually a software program that requires limited human interaction.

Referred to as the “glue” that connects modern apps, a well-designed API, or “Application Programming Interface”, is a genuine competitive differentiator in today’s digital economy.

API methods are called when you create a playlist on Spotify, look up a profile on Instagram or make a purchase via PayPal. In fact, the wave of excitement surrounding APIs is largely owing to how they enable developers to build services that easily integrate with other, more powerful services. When you’re redirected to Facebook or Google to log in to a third-party application, API calls use these web giants’ authentication servers (via the OAuth 2.0 protocol) and access tokens to verify your identity without revealing your credentials to the external application, providing a safer and more seamless user experience.

APIs’ building block potential extends far beyond personal verification data to finance, mapping, billing, mobility, sports, travel, farming and so on, resulting in the “softwarization” of an endless stream of services. In 2016, tractor manufacturer John Deere opened its API, allowing farm management and construction machinery companies to maximize profits by integrating crucial data into their applications. Thanks to API-generated data, the coffee mogul Starbucks has one of the most successful rewards-based loyalty programs, with over 16 million members. The driving force behind the growth of APIs for revenue creation is mass migration to cloud-based systems, with both digitally native and offline brands transforming their business models by leveraging REST and RESTful services (RESTful implies following most but not all of the constraints).

Do all APIs adhere to the REST standard?

Negative. An API can be any interface layer that enables one application to interact with another, but not necessarily over the internet. REST and RESTful APIs follow a standardized set of guidelines and always use HTTP. Basically, any process referred to as a “web service” can be considered an API, but not all APIs are web services. The advantage of adhering to the REST design pattern is that the constraints themselves, summarized below, make for greater flexibility and reliability.

1. Separation of Client and Server

Based on a crucial software engineering principle known as Separation of Concerns (SoC), components are designed and developed to be independent, so changes to one will not affect how the others operate.

2. Statelessness

“No client context shall be stored on the server between requests”. When data related to the end-user (client context) is needed to carry out an authorized operation, it must be provided by the client in each request, making the server stateless.

3. Cacheable

Any response messages from server to client must be labelled as cacheable or non-cacheable. If the data is cacheable and hasn’t changed since the last response, it can be reutilized by the client. Caching increases an application’s responsiveness by improving client-side performance and reducing load and latency on the server.
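A sketch of such a client-side cache, assuming a simplified Cache-Control: max-age model (real HTTP caching also involves validators like ETag, which are left out here):

```python
import time

# Toy client-side cache keyed by URI, honoring a max-age freshness window.
_cache = {}  # uri -> (expires_at, body)

def fetch(uri: str, origin_fetch) -> str:
    """Return a cached body if still fresh, otherwise hit the origin."""
    entry = _cache.get(uri)
    if entry and time.time() < entry[0]:
        return entry[1]                      # fresh: reuse, no server hit
    body, max_age = origin_fetch(uri)        # max_age as from Cache-Control
    if max_age > 0:                          # labelled cacheable
        _cache[uri] = (time.time() + max_age, body)
    return body

calls = []
def origin(uri):
    calls.append(uri)                        # count real server hits
    return f"representation of {uri}", 60    # body, max-age in seconds

fetch("/products/42", origin)
fetch("/products/42", origin)                # served from the cache
print(len(calls))  # 1: the second fetch never reached the origin
```

The second call answers from the cache, which is exactly the latency and server-load win the constraint is after.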

4. Layered system

Also derived from the Separation of Concerns principle, intermediary service layers can be implemented to help serve a client-issued request for a resource’s state. The constraint establishes that each layer can only communicate with the layer closest to it. If an authentication layer and a load-balancing layer are injected between the client and the end server, the client is agnostic to what these layers are or do, connecting only to the layer adjacent to it. This improves system scalability and security.
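The idea can be sketched as handlers that each talk only to their immediate neighbor (the layer names and logic are illustrative):

```python
# Each layer wraps the next; the client only ever calls the outermost one.

def origin_server(request: dict) -> str:
    return f"resource state for {request['path']}"

def load_balancer(request: dict) -> str:
    # Pretend to pick a replica here; the client never sees this decision.
    return origin_server(request)

def auth_layer(request: dict) -> str:
    if request.get("token") != "secret":
        return "403 Forbidden"
    return load_balancer(request)

# The client is agnostic: it talks to the adjacent layer and nothing else.
print(auth_layer({"path": "/orders/7", "token": "secret"}))
# resource state for /orders/7
```

Swapping the load balancer out, or adding a caching layer in the middle, changes nothing for the client, which is the scalability payoff.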

5. Code on Demand

This (optional) constraint comprises the client’s ability to download and execute code which is returned from a server as an applet or script. Code on Demand temporarily enables extending a client’s functionality, but it’s not a mandatory feature for a web service to be considered RESTful.

6. Uniform Interface

The interface between client and server must be defined and designed to ensure that any machine trying to access data hosted on a server uses the same interface. To support achieving this constraint, the following sub-topics were included:

a. Resource-based — requests to the server define the solicited resource state by including URIs (Uniform Resource Identifiers)

b. Manipulation of resources through representations — responses from the server contain the necessary representation information of a resource to allow the client to change the resource state.

c. Self-descriptive messages — each request message must contain the exact information required to serve it, and the returned message must contain all the data, and the respective metadata, needed to understand it.

d. Hypermedia as the Engine of Application State (HATEOAS) — each response from the server should include the requested URI along with hyperlinks that inform the client of the options for changing the current state of the application.

The last topic deserves further attention as it’s perhaps the most convoluted and debated of Mr. Fielding’s standards. Once understood though, it powers a well-designed REST web service like Zeus’ lightning bolt.

IV. Do Believe the Hype(rmedia)

HATEOAS — Hypermedia As The Engine Of Application State — the key constraint

The World Wide Web was conceived as a virtual state machine where websites and applications continuously pass from one state to the next. The path that an application state follows is relative to the resource state, so distinguishing between the two is key. Depending on the HTTP method sent in a client-issued request, a resource can be created, retrieved, updated or deleted. Known as CRUD operations, these actions correspond to the HTTP methods:

  • Create → POST
  • Read (retrieve) → GET
  • Update → PUT or PATCH
  • Delete → DELETE

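That mapping between the CRUD actions (create, read, update, delete) and the HTTP methods POST, GET, PUT/PATCH and DELETE can be expressed as a tiny dispatcher; the in-memory store and status codes below are a sketch, not a real framework:

```python
# Map HTTP methods onto CRUD operations against an in-memory resource store.
store = {}

def dispatch(method: str, uri: str, body=None):
    if method == "POST":             # Create
        store[uri] = body
        return 201, body
    if method == "GET":              # Read
        return (200, store[uri]) if uri in store else (404, None)
    if method in ("PUT", "PATCH"):   # Update
        store[uri] = body
        return 200, body
    if method == "DELETE":           # Delete
        store.pop(uri, None)
        return 204, None
    return 405, None                 # method not allowed

dispatch("POST", "/songs/1", {"title": "Siren Song"})
print(dispatch("GET", "/songs/1"))   # (200, {'title': 'Siren Song'})
dispatch("DELETE", "/songs/1")
print(dispatch("GET", "/songs/1"))   # (404, None)
```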
When a resource is modified on a server as the outcome of a CRUD operation, a different representation of that resource state is returned to the client, and the application state also transitions. Although the client context exists separately from the server-stored resource state, their transitions are intertwined. How hypermedia functions as the engine that drives them is our next concern.

HATEOAS stipulates that the resource representation returned by the server must include a series of follow-up links in hypermedia format, along with standardized link relations. As mentioned before, hypermedia refers to media that interactively allows hyperlinking to other data sources, and comprises text, URIs, audio, video and images. To simplify, let’s condense hypermedia into “clickable items” that allow users to navigate from page to page. In a web-browser-as-client scenario, this isn’t too hard to grasp. Applied to a REST API where the client is a software tool, the abstraction gets pretty darned abstract.

On its HATEOAS page, Wikipedia uses a banking application’s sample response to an HTTP “GET account” request. The server-issued response helps elucidate how the application state is determined through the actions afforded by “clickable items”.

Example of HATEOAS (from Wikipedia) — Banking App
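The original markup is not reproduced here, but a response loosely modeled on that example, written as a Python dict standing in for the JSON/XML payload (the account number, balance and link relations are illustrative), looks like this:

```python
# A HATEOAS-style representation of an "account" resource. The account
# number, balance and link relations are illustrative, loosely modeled
# on the Wikipedia banking example.
account_response = {
    "account": {
        "account_number": 12345,
        "balance": {"currency": "usd", "value": 100.00},
        "links": {
            "deposits":       "/accounts/12345/deposits",
            "withdrawals":    "/accounts/12345/withdrawals",
            "transfers":      "/accounts/12345/transfers",
            "close-requests": "/accounts/12345/close-requests",
        },
    }
}

# A REST client doesn't hard-code URIs: it discovers the next possible
# state transitions from the links in the representation it just received.
next_actions = account_response["account"]["links"]
print(sorted(next_actions))
# ['close-requests', 'deposits', 'transfers', 'withdrawals']
```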

The “account” resource representation incorporates hypermedia links with the options to make a deposit or withdrawal, transfer funds, or close the account. These options are traversed by the user in the form of buttons, icons, hypertext and so on. In the response, not only is information shipped (such as the current balance), but instructions for the resource’s next state are also offered. The ensuing client-side action determines what happens to the resource on the server side, triggering a subsequent shift in application state. Thus, the hypermedia sent in the response drives the application state, not vice versa.

In a REST API software-to-software transaction, the process mimics human interaction with a web app, but it’s the REST client that uses server-provided hypermedia URI links to access the resources it needs.

“Hypertext does not need to be HTML on a browser. Machines can follow links when they understand the data format and relationship type.” (Roy Fielding)

How will you benefit from knowing any of this?

The Geek Mythology Guide to REST APIs provides a basic intellectual framework for web APIs and the REST design pattern. With API-powered embedded financial services achieving skyrocketing valuations for new fintechs, and companies like Salesforce acquiring Mulesoft (an API management platform) for $6.5 billion, it’s no surprise that businesses everywhere are scrambling to implement an API strategy. Opening access to critical information enables customers to tailor their interactions with a product, while companies can also monitor API usage to better understand customer behavior.

At this rate, your not-too-far-in-the-future car is already using APIs to deliver automated updates on everything from insurance to mileage and repairs (while it drives itself). Anarchic scalability has forever transformed how we interact with the external world, so you can now pride yourself on understanding the technologies that made it possible.

Fun facts I purposefully left out:

  • On top of authoring the REST design pattern, Roy Fielding co-authored the HTTP specification, co-founded the Apache HTTP Server Project and chaired the Apache Software Foundation, one of the largest open source organizations on the planet.
  • The World Wide Web began in 1989 as a non-profit project at the European Organization for Nuclear Research (CERN). By August 1991 Sir Tim Berners-Lee and his CERN colleagues had invented HTML, HTTP, URIs, and the first web client and server. Within 5 years the internet expanded to 40 million users and its ability to scale became a matter of serious concern. In came Roy Fielding with the constraints that made Web history.
  • HTTPS was developed to stop sensitive data being intercepted and compromised during web transactions. The added “S” stands for “secure”. HTTPS encryption and authorized security certificates are now used on over 50% of websites worldwide, guarding against dangers like phishing, internet service providers injecting advertisements or trackers into the sites you visit, and governments harvesting confidential browsing activity.
  • HTTP and HTTPS are considered limited protocols for IoT (Internet of Things) applications and other application-layer protocols have been developed as an alternative.
  • Hermes was also the god of trade, wealth, luck, language, thieves, and travel, all facets of the internet’s potential. It was believed that while still a baby, he stole 50 cows from his half-brother Apollo.

WRITTEN BY Mercedes Arias-Duval

7 Ways a VPN Makes Life That Much Cooler

VPN stands for virtual private network. Its main attributes are anonymity and security while browsing the internet. In this article, we’ll be looking at some surprising and very handy uses of VPNs.

Some basics: When you do a search, every request you make passes through your ISP (Internet Service Provider) before reaching the server that hosts your destination website. All the data in this exchange is unencrypted, meaning your ISP can read it. What’s worse, they can also hand your activity logs over to third parties, like government agencies or advertisers.

Envisage a VPN as a safe channel that keeps the traffic between your computer (or any mobile device) and the site you visit completely anonymous. Instead of connecting to that site’s server directly, your computer connects to the VPN, and all resulting exchanges are safely held within that secure connection.

In other words, you function as if you were on the same local network as the VPN. An encrypted request is forwarded to the website via the VPN and the response is likewise forwarded back to you.

Now, let’s get to the meat of some nifty ways to use a VPN.

1. Access Geo-Blocked Websites

Your VPN connection will let you use the Internet as if you were connecting from the VPN’s location. Let’s say you’re traveling and you want to watch a documentary premiering on Netflix or BBC’s iPlayer. There are VPN services which have multiple servers located in several countries, giving you the ability to choose where you appear to be connected from.

2. Dodge Targeted Ads

Who wants to be easy prey in the creepy era of personalised targeted ads? Information regarding your food and music preferences, where you shop, or general health issues are just some of the data that can be gathered about you. Unfortunately, there is no official regulation overseeing what can or can’t be handed over by your ISP to aggressive online marketing corporations. To make matters worse, the US Senate recently voted to allow ISPs to sell your browsing history to advertisers, eliminating privacy rules that would have required your prior consent.

Yep, ISPs can now build a detailed profile of their customers’ viewing and listening history. HTTPS helps reduce the Big Brother-style mapping, but they will still be able to see that you visited a particular domain. With a VPN though, traffic encryption means that anytime you listen to a podcast or look up the nearest pharmacy, you’ll appear to be at your VPN’s IP address instead of your own.

3. Browse from a park, a bus, or a sushi bar without getting hacked.

Hackers will always come up with new malicious programs, viruses and other threats. The insecurity inherent in public networks such as open WiFi can be mitigated by passing your traffic through a VPN connection. If an attacker tries to gather sensitive data, all they will see are the incomprehensible characters exchanged between the end user and the VPN server. Antivirus software and firewalls do a relatively good job of keeping us safe, but a VPN adds an extra layer of encryption between end users and the big bad wolves lurking around the internet.

4. Optimize Connectivity Speed

ISPs around the world purposely slow down popular streaming sites. Both YouTube and Netflix get throttled to reduce bandwidth usage. This is sometimes referred to as “traffic shaping”, and it really, really slows things down. Other targets are online games such as Minecraft or World of Warcraft. At the outset, this practice offers the advantage of fighting internet traffic congestion. Ultimately though, it hampers connectivity, and users even end up paying their ISP more money for increased speed.

By utilising a VPN, you save yourself a lot of hassle caused by traffic shaping, as your ISP won’t be able to detect that you’re connected to these sites.

5. Talk to your Friends and Family Abroad for Next to Nothing

Spending time away from home? We all know that with Skype or any other VoIP (Voice over Internet Protocol) service, long-distance calls are more affordable than calling direct. Nevertheless, depending on the country you’re calling and for how long, rates can get pricey. By connecting to the service via a VPN that shares a server location with the destination you’re dialling, the cost becomes equivalent to making a local VoIP call. Sweeeet.

6. Outsmart the Airlines

We’ve all had that moment when you get excited about the price of a flight, only to find that after checking out a few more options, the original fare goes up significantly.

Blasted cookies!? Actually, it’s not just the multiple searches for “Istanbul” on different airline and travel sites that are foiling your ultra-cheap ticket. Geo-location and newer forms of data collection are getting increasingly sophisticated.

If you’re in a country which gets targeted with higher fares, you can avoid geo-location pricing by connecting to a VPN service that is located elsewhere. Added to that, encrypted browsing means you can totally bypass the airfare data-profiling tug of war.

7. Safer and Faster File Sharing

It’s no secret that VPN connections are used by many to download files from Torrent services. There is a legal way to do file sharing though and we can only hope that’s how you roll. However, if you think your good intentions on Torrent sites can’t be monitored by government agencies, such as the NSA, think again. Nobody wants to be blacklisted by these guys or their equivalent around the world. The most sure-fire way to keep your identity from being disclosed is by using a VPN.

Also, as mentioned above, ISPs often throttle very popular services, and that includes traffic from protocols like BitTorrent. So not only do you stay safer, you also get improved speeds when sharing files via a VPN connection.

I’m converted. Now where do I sign up?

Getting a VPN is fairly straightforward now, as most providers offer one-click installation. If you have any VPN life hacks you would like to share, please add them in the comments section below.

Why you’re nuts not to adopt continuous integration and deployment cycles

Continuous delivery is the next phase, guaranteeing that all the necessary pre-deployment steps are automated. We don’t ever want the lower lead time to result in increased re-work time, do we? Instead, let’s aim for a smooth path where each integration complies with release criteria before the new code goes live on an application or website.

Need For Speed

“I want this code in live production faster to get my UX numbers soaring.”

Don’t keep this phrase in your musty drawer of unmet goals. Continuous integration, delivery and deployment could be the differentiators that’ll help you ace the climb from low to high web performer.

A software delivery lifecycle can take anything from weeks to months. Today’s top performers automate the build and deployment, environment provisioning and testing processes to open up the possibility of focusing on better products which yield higher returns.

The first component is Continuous Integration (CI). When developers merge their code into the mainline branch or working code base as often as possible, you’ve got CI.

It’s basically triggering a build whenever a change gets committed to the source code. In other words, the CI system fetches the code from the source code repository, compiles it and runs automated tests to create a build. This gives you full visibility of the project code and paves the way for the earliest possible error detection.
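A toy version of that trigger, with invented function names (a real CI server such as Jenkins or GitHub Actions does this at scale):

```python
import subprocess
import sys

# Toy CI step: every "commit" triggers the test suite, and a build
# artifact is only produced when the tests pass. The function names
# and commit ids are invented for illustration.

def run_tests(test_code: str) -> bool:
    """Run a stand-in test suite in a subprocess; True means green."""
    result = subprocess.run([sys.executable, "-c", test_code])
    return result.returncode == 0

def on_commit(commit_id: str, test_code: str) -> str:
    if run_tests(test_code):
        return f"build-{commit_id}"      # artifact ready for deployment
    return "build failed: fix before merging"

print(on_commit("abc123", "assert 1 + 1 == 2"))   # build-abc123
print(on_commit("def456", "assert 1 + 1 == 3"))   # build failed: ...
```

The failing commit never produces an artifact, which is exactly how CI surfaces errors at the earliest possible moment.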

The prime factor here is risk reduction. Risk is majorly reduced when a build is created after running automated tests for every commit. With each integration, bugs and errors are easy to find and easy to fix, making every build simpler.

Let’s break down how drastically lowering your lead time will simplify life and enable you to cash in on the goods.

Work in small batches and get feedback from users sooner, before spending long stretches of time and resources on each integration. Agile web development is also a good way to see profits sooner, as the delivery lifecycle is based on a product in live production. This can even be integrated into A/B testing to help you decide upon the final implementation (more on that later).

The hypothesis-driven approach to product development reduces possible costs of building out whole features without knowing for sure what is preferred by the user.

Therefore, you see which features bring you ROI and measure the differences in performance for each. You can also lower the fixed costs that a release process involves by having a build/test/deploy pipeline that relies on automation, bringing down the costs associated with delivering incremental changes to software in the traditional release process.

So continuous delivery is hot. But to go the whole hog and stand out in this business, you need to bump that up a notch toward continuous deployment. Say unit tests, platform tests and staging integrations are all automated, meaning that you have effectively achieved continuous delivery, but deploying to production is still a manual, painful, time-consuming procedure. What we’re going to look at next is when automation takes over this step too.


Deployments should be low-risk and performable on demand. The new functionality does need to be tested and controlled, though. A continuous delivery pipeline will apply patterns like blue-green deployments that significantly reduce the possibility of downtime.
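A blue-green switch in miniature (the environment names, versions and health check below are illustrative):

```python
# Blue-green deployment sketch: two identical environments; the router
# points at one while the other receives the new release. Traffic only
# switches after the idle environment passes its health check.

environments = {"blue": "v1.0", "green": "v1.0"}
live = "blue"                          # router currently sends traffic here

def healthy(env: str) -> bool:
    return environments[env] is not None   # stand-in for a real health check

def deploy(new_version: str) -> str:
    global live
    idle = "green" if live == "blue" else "blue"
    environments[idle] = new_version       # release to the idle environment
    if healthy(idle):
        live = idle                        # flip the router: near-zero downtime
    return live

deploy("v1.1")
print(live, environments[live])  # green v1.1
```

If the health check fails, the router keeps pointing at the old environment, which is the rollback story in a nutshell.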

Safety and QA are two of the staples that continuous deployment instrumentation looks after. An infrastructure that lets you back out a new feature when a defect has been overlooked by the automated process is required to guarantee successful continuous deployment.

So, to ensure that live users of an application have new code running for them error-free, we need instrumentation to detect when the automated integrations churn out a poor result. Such an external instrument should immediately interrupt the process and roll back any updates, meanwhile notifying the developer(s).

Fast regression detection made possible by automated tools can save individual web professionals or entire teams heaps of time allowing them to focus on usability and coding additional new features.


Simply cranking up the frequency of deployments to the max should by no means be considered a foolproof way to ensure quality. You have to ascertain you’re working with the right target and avoid common mistakes in agile practices.

Here’s an enjoyable graphic representation of agile practices which is about as easy to navigate as the Tokyo subway map:

Simplify your life and improve your dev wellness

Continuous delivery and deployment improve the overall process by allowing teams or individuals to better understand which resources are going into the right features and which aren’t. The continuous approach is improving UX on a global scale, reducing levels of peer frustration and improving overall dev zen (yes, it’s a thing). Ommmm.

What is the most important sweet spot shared by all of us? Happy users.  By automating software delivery cycles to allow developers to focus on building great products we are entering a new era of application and website development.


From Puppet’s 2016 State of DevOps report:

  • High-performing IT organizations deploy 200 times more frequently than low performers, with 2,555 times faster lead times.
  • They have 24 times faster recovery times and three times lower change failure rates.
  • High-performing IT teams spend 50 percent less time remediating security issues.
  • And they spend 22 percent less time on unplanned work and rework.
  • Employees in high-performing teams were 2.2 times more likely to recommend their organization as a great place to work.
  • Taking a lean approach to product development (for example, splitting work into small batches and implementing customer feedback) predicts higher IT performance and less deployment pain.


Barcelona’s top tech events for 2017

Maker faires, Fab Lab, Mobile World Congress, Big Data and IoT meetups are just a taste of what makes Barcelona a technology freak’s haven.

People from all over the globe flock to the Catalan capital’s sunny streets to get a feel for a place that goes from ever-so-stylish to grungy in just a few blocks. Despite the bachelorette revellers, stag night lads, and skaters galore, one thing that can’t be denied is how much is being invested to make Barcelona something more than a cruise stop destination. Yes, a newer type of tech tourism, entrepreneurship and local innovation initiatives are giving BCN a geek chic edge that is taking Europe by storm.

Contentcult is delighted to share the scoop on this year’s most happening technology-related events. From high-profile events like the Mobile World Congress or the music and technology festival Sonar to lesser-known 3D printing fairs, these happenings will give your calendar that in-the-know oomph.

Tech Experience Conference Nov 2017

Smart City Expo  November 2017

Mobile World Congress March 2017

Sonar June 2017

IOT World Congress October 2017

4YFN  Ongoing

Barcelona Maker Faire June 2017

In3dustry 3D printing event October 2017

Barcelona Games World  October 2017

Gamification Think Tank Meet up


After last year’s loved-up buzz over our very own Ciutat Comtal hosting the 4th Gamification World Congress, 2016 brings us monthly Think Tank meet-ups to keep BCN gamifiers linked up and yapping about game dynamics.

Focusing on socially conscious gamification methodologies for education and civic good, the meet-ups also bring together individuals who work within this field or need to incorporate gamification elements into their projects.

Discussions are geared towards understanding how far game dynamics can reach, and how they can breach the mental lulls spurred on by mundane task completion or stagnant structures of learning. We look at what’s happening in other cities and try to gain insight from models of game-like services that are working. Whether digital or IRL, well-designed games and play have the ability to trigger our instinctual wish to be in flow. The pooling of knowledge and resources at Barcelona Gamification Think Tanks is open to all.

RSVP the next meet up here:

The Human Library



The Human Library is a worldwide ongoing project where the reader takes out a human book. With the aim of breaking down stigma and preconceived notions about members of marginalized communities, Content Cult is bringing this event to Barcelona’s historic centre this spring/summer 2016.

Real people – Real Conversations. Stay tuned for when and where!


The Human Library is a project where the borrowed material is a human book. It has been held in more than 70 countries over the last 15 years, breaking down the barriers that create discrimination. Content Cult is bringing this event to Barcelona in spring/summer 2016.

Real people – Real conversations. We will soon publish the where and when.



The Human Library projects around the world:

BCN Gamification Think Tank 1


March 30th 2016 marked the first Gamification Think Tank Meetup in Barcelona.

The first monthly Think Tank was dynamic and full of lively banter.

We were fortunate enough to enjoy beautiful modernist surroundings for our first location, where slides of player-type frameworks dominated the buzz around the first half of our session.

Should Bartle’s player types be applied to gamification at all? We debated the differentiation between MMO gamers and users of a gamified solution or service.

Initial conceptualising on flow led us to ask questions such as: What entices the user to engage in an activity that is difficult but rewarding? What types of rewards, intrinsic and extrinsic, encourage us to continue?

The second half of our Think Tank focused on positive projects ranging from environmental to educational:

What’s happening in other cities?

Environmental gamification services like Recycle Bank.


Children’s attention capacities.

Games created by kids, game like solutions for math and reading.

Philanthropy games for science and healthcare.

Check out the links below: