Understanding the POST Method: The Create Operation of the REST World

Create, Read, Update and Delete – or CRUD, for short – form the fundamental operations provided by any service. In REST, a POST request is used to add a new entry to the collection represented by a resource. In other words, to create a new item, REST makes use of the POST HTTP method. In this post, the focus will be on what POST is.

To understand POST, it is essential to understand what it expects from a request and what kind of response it can provide, including the headers. Hence, in this post we will discuss in detail what a POST request and response entail. The next post in this series will focus on how to implement create functionality for a REST service using RESTEasy and the POST method.

What is POST?

POST is one of the most commonly used methods of HTTP. Whenever you click the submit button on a registration form or an application form, you are sending a POST request to the server. In the world of REST, POST is essentially used to create a new entry within the collection represented by a resource. As with any HTTP method, to understand how POST works, we have to understand what a POST request and response contain and what they mean.


According to the spec, “The POST method is used to request that the origin server accept the entity enclosed in the request as a new subordinate of the resource identified by the Request-URI in the Request-Line”. The important part of the definition is “accept the entity enclosed in the request as a new subordinate”. It tells us the following:

  • The request will include an entity. This is an important point for validation and validation-related responses.
  • The server must accept the enclosed entity as an item to be added to the collection of the resource identified by the URI. This is important because it means the server cannot process the entity as part of a different resource.

The aforementioned points bring up a question – how will the server know the format (or MIME type) of the entity? To answer this, we have to look at the two parts of a POST request: the header and the body.


Request Header

The request header is the information sent to the server detailing what the client wants and what it will accept as a response. Basically, it contains the following:

  • Method – the HTTP method to be executed by the server. In the case of a POST request it is, obviously, POST.
  • URL/URI – the identifying URI of the resource.
  • Version – the version of HTTP, either 1.0 or 1.1.
  • Content-Type – the MIME type/format of the content. For example, if the format of the entity is XML, Content-Type will be either text/xml or application/xml.
  • Content-Length – the length/size of the enclosed entity or content.

An example of a POST request header with /book as the URI and application/xml as the Content-Type:

POST /book HTTP/1.1
Content-Type: application/xml
Content-Length: 19


Request Body

The entity (or data) to be processed is sent as the body of a POST request. The body contains data in the format specified in the header. One point to keep in mind is that if the format of the data in the body does not match the format specified in the header, the server can reject the request. For example, if the header specifies the type as application/xml but the body contains JSON, the server can reject the request.

Following is an example of a POST request with header and body:

POST /book HTTP/1.1
Content-Type: application/xml
Content-Length: 71

<?xml version="1.0"?>
<book>
  <title>The Invisible Man</title>
</book>
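As a sanity check, Content-Length must equal the byte count of the entity body. A small plain-Java sketch (the XML entity, UTF-8 and Unix line endings here are illustrative assumptions, not anything the spec mandates):

```java
import java.nio.charset.StandardCharsets;

public class ContentLength {
    public static void main(String[] args) {
        // A small XML entity, with \n line endings.
        String entity = "<?xml version=\"1.0\"?>\n"
                      + "<book>\n"
                      + "  <title>The Invisible Man</title>\n"
                      + "</book>";
        // Content-Length is the size of the entity body in bytes, not characters.
        System.out.println(entity.getBytes(StandardCharsets.UTF_8).length); // prints 71
    }
}
```

For pure ASCII the byte count equals the character count, but for non-ASCII entities the UTF-8 byte count is what belongs in the header.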


A REST message consists of a request and the corresponding response. We have discussed the POST request; next, let us look at the POST response.


POST Response

The POST response tells the client whether the request has been successful or not, and optionally contains either the entity that was created or the reason for the failure of the request. The main points to keep in mind about the POST response are:

  • If an entity has been created as a result of processing the request, the server should return the entity as part of the response body, and the header should contain a Location URI specifying where the entity can be found.
  • The response is not cacheable by default, unless cache-control headers are present.

Like the request, the response contains a header and a body. However, in the case of a response, the body is optional.

Response Header

The response header provides details of the result of the processed request. The most commonly used headers are:

  • Cache-Control

It defines cache-control parameters such as the maximum amount of time for which the response can be cached, the server from which to revalidate the cache, etc.

  • Content-Type

This header tells the client the format or MIME type of the content in the body.

  • Content-Length

It is through this header that the client learns the size of the content within the body.

  • Location

The client uses this header to read the URI of the newly created entity.

  • Status

It tells the status of the request. If the request has been processed successfully, the status will be 201, meaning Created. (Strictly speaking, the status code appears on the response’s status line rather than as a header field.)

Among the above, Status and Location are the most important, since they tell us whether the request was successful and, if it was, the URI of the newly created entity.

An example of a response with status 201 Created and http://localhost:3000/book/1 as the location:


HTTP/1.1 201 Created
Location: http://localhost:3000/book/1
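The whole exchange can be simulated in-process with the JDK's built-in com.sun.net.httpserver server. This is only a sketch – the /book path and the id 1 in the Location header are hypothetical stand-ins for a real service that would actually store the entity:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PostDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical in-process server standing in for the book service.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/book", exchange -> {
            byte[] entity = exchange.getRequestBody().readAllBytes();
            // Pretend the entity was stored and assigned id 1.
            exchange.getResponseHeaders().add("Location", "/book/1");
            exchange.sendResponseHeaders(201, entity.length); // 201 Created
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(entity); // echo the created entity back
            }
        });
        server.start();

        int port = server.getAddress().getPort();
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:" + port + "/book").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/xml");
        try (OutputStream os = conn.getOutputStream()) {
            os.write("<book><title>The Invisible Man</title></book>"
                    .getBytes(StandardCharsets.UTF_8));
        }

        System.out.println(conn.getResponseCode());          // status code
        System.out.println(conn.getHeaderField("Location")); // URI of the new entity
        server.stop(0);
    }
}
```

Running this prints the 201 status code and the Location header the client would use to fetch the newly created entity.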

Response Body

The body is optional for a POST response. It will be present only if a new entity was created; otherwise the body is empty. The format of the body, if present, should match what the client requested through the Accept header of the request. That is, if the request's Accept header is application/json, then the Content-Type of the response must be application/json and the body must carry the entity in JSON format.
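Content negotiation through the Accept header can be sketched the same way; the server below is again a hypothetical stand-in that chooses its Content-Type (and body format) from the client's Accept header:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class AcceptDemo {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/book", exchange -> {
            // Honour the client's Accept header when choosing the response format.
            String accept = exchange.getRequestHeaders().getFirst("Accept");
            String body = "application/json".equals(accept)
                    ? "{\"title\":\"The Invisible Man\"}"
                    : "<book><title>The Invisible Man</title></book>";
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", accept);
            exchange.sendResponseHeaders(201, bytes.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(bytes);
            }
        });
        server.start();

        HttpURLConnection conn = (HttpURLConnection) new URL(
                "http://localhost:" + server.getAddress().getPort() + "/book")
                .openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Accept", "application/json"); // ask for JSON
        conn.getOutputStream().close();

        System.out.println(conn.getHeaderField("Content-Type"));
        System.out.println(new String(conn.getInputStream().readAllBytes(),
                StandardCharsets.UTF_8));
        server.stop(0);
    }
}
```

Because the client asked for application/json, the response's Content-Type and body come back in JSON rather than XML.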

That brings us to the end of this post. In the next one, I will detail what steps are required to leverage RESTEasy for creating a REST service, how to setup RESTEasy based project with maven and how to deploy as well as test the implemented functionality. Till then…

A Brief History of Astronomy

The following has been translated from the book "Key to Nature" 
published by Kerala Sasthra Sahitya Parishad.
This is the second draft.

Astronomy is as old as civilization. The first humans who looked at the vast sky and the twinkling stars in amazement were the progenitors of Astronomy.

Ancient civilizations followed the Geocentric Theory of the Universe. The ancient Greek civilization created a model of the Universe by calculating the motion of heavenly bodies. The model of the universe proposed by Claudius Ptolemy was the result of the meticulous work done by the Greeks.

The ancient Babylonians were the first catalogers of the stars. By around 1600 B.C., they had more or less recorded the motion of the planets. By 800 B.C., they were able to record the movement of the planets with respect to the distant stars. The Babylonians were able to, by and large, understand the recurring character of planetary motion. Astronomy was controlled solely by the Babylonian priests, who were uninterested in finding explanations for their observations.

It was the Greeks who tried to create a model of universe that was based on geometry. Pythagoras (582 B.C. – 497 B.C.), a Greek philosopher, tried to explain the universe in mathematical terms.

The Homocentric model of planetary motion put forth by Eudoxus (408 B.C. – 355 B.C.), a student of Plato, needs special mention. Eudoxus's model tried to explain the periodic behavior in the motion of heavenly bodies by proposing that they moved uniformly along concentric circular paths.

Aristarchus theorized that it is the sun, and not the earth, that is the center of the universe. He argued that the stars and planets rise and set due to the motion of the earth around the sun, and not the other way round.

Ptolemy’s period, the early part of the second century A.D., is considered the pinnacle of ancient Greek astronomy. The Almagest was Ptolemy’s most famous creation, and the model of the universe he presented in it remained the accepted one until the time of Copernicus – a span of around 1400 years.

However, the model of the universe that ancient Indians had conceptualized was vaster than any of the aforementioned models. In the Pranava-Vada, authored by sage Gargyayana*, the size of the universe theorized is comparable to the modern understanding of our universe.

The foundation of modern astronomy was laid in the era of Copernicus. His observations contradicted the postulates of Ptolemy’s model. He presented the Heliocentric model of the universe in 1529. The rotation of the planets and their revolution around the sun were parts of this theory, and he postulated that the paths of planetary motion were circular. He theorized that the rotation of the earth caused day and night as well as the nightly rising and setting of the heavenly bodies; that the earth's revolution around the sun caused the apparent yearly motion of the sun; and that the differing orbits of certain planets around the sun caused their retrograde motion.

Tycho Brahe, who is considered the father of observational astronomy, did not consider the Heliocentric model of Copernicus to be correct. However, Kepler, who was Brahe's assistant, concurred with the Copernican model and formulated the famous laws of planetary motion. Galileo Galilei, a contemporary of Kepler, used the telescope to study the paths of heavenly bodies. He studied the surface of the moon, found that Venus waxes and wanes, and discovered the four largest moons of Jupiter. Based on his observations, he supported the Heliocentric model of Copernicus. After Galileo, Sir Isaac Newton discovered that it is gravity that controls the paths of planetary motion. He explained the properties of planetary motion using mathematics, and also proved Kepler’s laws of planetary motion mathematically.

The foundation and advance of modern astronomy are due to the invaluable contributions of brilliant scientists such as Copernicus, Kepler, Galileo and Newton. Along with their contributions, the invention of telescopes and clocks laid a strong foundation for the progress of modern astronomy. The advent of telescopes helped scientists look deeper and deeper into space. The telescope invented by Galileo was based on the refraction of light. In 1668, Newton built a better type of telescope that worked on the principle of reflection of light. As science progressed, various other telescopes were invented that worked with other parts of the electromagnetic spectrum, such as infrared and X-rays.

A telescope that could be used to study the radio waves emitted by heavenly bodies was the brainchild of Karl Jansky of Bell Telephone Laboratories. Radio telescopes gave birth to a new branch of astronomy known as Radio astronomy. Advances in the understanding of outer space led to telescopes that could see into the farthest reaches of the sky, which resulted in advanced research in fields based on other radiations, including X-rays, Gamma rays and Ultraviolet rays. The 17th century saw the invention of the pendulum clock, which brought forth better clocks able to measure periods of time more accurately; later, the discovery of the maser and the laser made such measurements more accurate still.

The Doppler Effect states that relative motion between a source and an observer changes the observed frequency of the wave emitted by the source. Vesto Melvin Slipher, an American astronomer, applied the Doppler Effect to the light traveling from galaxies to explain the shift in their spectral lines. It paved the way for a new mechanism to measure the distance of faraway galaxies. This led to Hubble's discovery that galaxies are moving away from each other at speeds proportional to their distances. Hubble’s discovery, coupled with the Doppler Effect, helped in measuring the distances between galaxies accurately.

Edwin Hubble’s discovery that the universe is expanding became a milestone for astronomy. In reality, the theoretical context for such a discovery was already in place: Einstein’s General Theory of Relativity predicts an expanding universe. However, Einstein himself did not believe the prediction, which is why he introduced the Cosmological Constant. At the same time, Alexander Friedmann and Georges Lemaitre went on to study the evolution of the expanding universe. The theories put forth by Einstein and Max Planck on the properties of black body radiation proved important in understanding the genesis and evolution of an expanding universe. A black body is a system or object that absorbs all the radiation incident on it and then re-radiates it without losing any radiation in the process. The radiation that escaped during the Big Bang follows the spectrum of black body radiation.

George Gamow applied the theory of black body radiation to the early universe. Gamow, Ralph Alpher and Robert Herman studied the problem of how the elements were formed in the nascent universe, while Fred Hoyle and William Fowler worked on nuclear synthesis within stars. According to Gamow, all the elements were synthesized during the Big Bang; according to the calculations of Hoyle and Fowler, however, the heavy elements are synthesized within the cores of stars and during supernova explosions. Based on this, the Steady State Theory was put forth by Hoyle, Thomas Gold and Hermann Bondi. According to it, the universe has no beginning or end; it was always in the same state we see it in today. Stellar nucleosynthesis of the elements was proved to be correct, and this was the one area in which the Big Bang Theory had to improve. The discovery of the cosmic background radiation boosted the acceptance of the Big Bang Theory. In the recent past, evidence supporting the Inflationary theory has been found by the Wilkinson Microwave Anisotropy Probe. Astronomy has truly become a branch of science.

Many exotic objects – neutron stars, binary star systems, pulsars, quasars, black holes – have been discovered. Astronomy, which once depended entirely on the visible spectrum of light, now makes use of the entire spectrum of electromagnetic radiation, including gamma rays, X-rays and so on. Physicists who study elementary particles are providing significant information to astronomers, and this transfer of information happens both ways. Today astronomy, and hence astrophysics, is one of the most flourishing branches of science.


Introducing REST and RESTEasy

Reusability and the distribution of logic into self-contained units have been the driving force behind technologies such as DCOM and CORBA. However, complexity and platform dependency have kept these from becoming standard technologies for creating and consuming distributed, reusable systems. That is why the concept of the Web Service came into prominence. Since they are based on XML and HTTP, Web Services soon became the most used standard for implementing distributed systems that cut across languages, platforms and technologies. There are two types of Web Services – SOAP based and REST based. This series of posts will focus on developing Web Services based on REST using Java.

REST is short for REpresentational State Transfer. This post concentrates on the whys and wherefores of REST. The first section focuses on the basics of REST. The second section is about the differences between SOAP based and REST based Web Services. The last section introduces RESTEasy, which can be used to develop REST based Web Services.

REST – the whys and wherefores

REST, short for REpresentational State Transfer, is an architectural style for developing stateless web services that run over HTTP; clients written in different languages can access them just as they access any other web page. In REST, every object accessible via HTTP is a resource, and each of these resources is addressed by a Uniform Resource Identifier, or URI.

If you consider the full form of REST, there are two main parts – representational and state transfer. The former relates to the resource itself and the latter relates to the client. Let us say, for the sake of example, there is a library management service that can be accessed using the URI http://library.contessa.com/rest. It contains a resource 1695 that provides the details of a particular book. Clients can access it using the URI http://library.contessa.com/rest/1695. The response is in the form of an HTML page. The HTML page is a representation of the resource 1695 provided by the service. The resource can have many representations – HTML, XML, JSON etc. Once the client receives the response, which in our example is HTML, the client is placed in one state.

Next, let’s say the HTML contains a link to another resource – the details of the author. If the client traverses to the author resource using the link, the client is placed in a different state. So, in short, whenever the client traverses to a different resource, or to a different representation of the same resource, its state is transferred from one representation to another. Hence the term REpresentational State Transfer is used for such services, which are centred on resources, representations and state changes.

Now that we have the basic concept behind REST, let us look at the most common terminologies used in REST, which are:

  1. Resource: Anything that can be accessed using a URI is termed a Resource. Scripts that return records from a database, images, web-consumable slides etc. are examples of resources.
  2. Representations: A representation is a format in which a client can request a resource. If a Resource can be represented as both XML and HTML, then XML and HTML are its representations.
  3. Methods: The way a client communicates with the server to perform certain operations is defined by the Methods supported by the resource. Since REST uses HTTP as the protocol for communication, a resource can support all or a subset of the methods provided by HTTP. For example, HTTP provides the GET, POST, PUT, OPTIONS, DELETE and HEAD methods. It is up to the resource whether or not to support all of them.
  4. Messages: Each request and response is a message. One important point to keep in mind is that messages should be self-contained. For example, a response containing the details of a resource with id 6743 should contain everything related to that resource. The client should not need to wait for a second response to have complete data about the same resource.
  5. State and session: The currently sent representation is the State of the resource. If the client needs to track what the state was before the current one, it will need to implement a session at its end. In REST, the server is only concerned with the state of the resource, not of the client.

So, if we want to define characteristic features of REST on the basis of what we have discussed so far, the features will be:

  1. It is an architectural style and not a framework or toolkit.
  2. It is not a standard. However, it uses standards for communication (HTTP), representations (XML, JSON) etc.
  3. It makes use of a pull-based client-server interaction style. The client requests (pulls) a representation of a resource from the server; the server does not send the representation unless a client asks for it.
  4. It is stateless. That means the server does not keep track of the requests it receives. It is the client’s responsibility to provide all the required information in each request.
  5. The responses are cacheable. The server must mark each response as either cacheable or non-cacheable so that the client can take advantage of caching mechanisms to improve performance.
  6. The interfaces for resources in REST must be generic. In other words, any two resources must be accessible to clients using the same methods (GET, POST, PUT, DELETE etc.).
  7. All resources must be named using URIs.
  8. Resources can be interconnected using URIs, through which the client can move from a particular representation of one resource to a representation of a different resource.
  9. It supports layered components such as proxies, gateways etc. so as to implement security, increase efficiency etc.

The next natural question you will have is how REST differs from SOAP based services. We will be tackling that in the next section.

REST and SOAP based services – the differences

There are many differences between SOAP based services and REST. The major ones can be described using the following points:

  1. Transport Protocol
  2. Based on RPC
  3. Standards
  4. Persistence of state
  5. Uniform resource

The main points that make REST and SOAP different are the second and fourth points. Following are the details.

  1. Transport Protocol: REST is dependent on one transport protocol, which is HTTP. SOAP based services can be used with a variety of transport protocols.
  2. Based on RPC: REST is not based on RPC, due to which REST can make use of generic interfaces. SOAP itself is RPC based, so each service has its own methods and interfaces. This makes building generic toolkits and clients around such services a tough undertaking.
  3. Standards: REST uses existing web standards and does not have standards of its own. This makes creating services easier, as no additional set of libraries and toolkits is required. SOAP based services have their own standards, including WSDL, SOAP and UDDI. So to create a service using SOAP you will require a minimum set of libraries to parse WSDL, understand SOAP and register the service with UDDI.
  4. Persistence of state: REST is stateless; the server does not keep track of changes in client state, so no session handling is available at the server side. SOAP based services can handle sessions.
  5. Uniform resource: In REST, access to each resource, and the methods for the operations it supports, must be uniform across all resources. For example, each resource is addressable using a URI; there is no intermediary that maps the resource to the URI. In SOAP based services, the way resources and their operations are accessed can vary from resource to resource.

The points described above are not meant to tell you which of the two is better, but rather to bring out the differences between the two currently most common types of web services. With that, let us move to the next section, which will introduce you to RESTEasy.

RESTEasy – What is it?

In Java, all the frameworks that provide functionality to implement REST based applications are implementations of the JAX-RS specification, and RESTEasy is no exception: it is a portable implementation of JAX-RS. There are two versions of this specification – 1.1 and 2.0. All RESTEasy versions prior to 3.x implemented JAX-RS 1.1; from 3.0 onwards, RESTEasy implements JAX-RS 2.0.

The main features provided by RESTEasy are:


Portability:

It can be used in any application server that runs on JDK 6 or higher. For example, a RESTEasy based application/web service built for JBoss AS can be deployed in GlassFish.

Client framework:

RESTEasy has a client framework that leverages JAX-RS annotations, using which a developer can write HTTP clients easily. One thing to keep in mind is that JAX-RS itself defines annotations for the server side only.

Client cache:

It supports caching semantics, including cache revalidation. This ‘client cache’ is a browser-like cache that can be used by applications making use of the RESTEasy client framework.

Server cache:

RESTEasy provides a server-side, in-memory cache that stores generated responses. It is a local response cache, since it sits in front of the REST service, and thanks to this RESTEasy can automatically handle ETag generation and cache revalidation.

Providers for common media types:

The most common media types used for data transfer in REST services are XML, JSON, YAML, Multipart and Atom. RESTEasy has providers that marshal to and unmarshal from these media types.

Interceptor model:

Interceptors provide a way to process a request before it is passed to the business method, and a response before it is returned to the client. RESTEasy provides interceptors that can work either on the bodies of the request and response or on the request and response themselves.
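The interceptor idea can be sketched in plain Java. Note that the names below are hypothetical and this is not RESTEasy's actual interceptor API; it only illustrates how a chain of interceptors wraps the call to the business method:

```java
import java.util.List;
import java.util.function.Function;

public class InterceptorDemo {
    // Hypothetical interface: each interceptor sees the request and can
    // act before and after delegating to the rest of the chain.
    interface Interceptor {
        String aroundInvoke(String request, Function<String, String> next);
    }

    // Walk the chain; when it is exhausted, call the business method.
    static String invoke(List<Interceptor> chain, int i,
                         String request, Function<String, String> target) {
        if (i == chain.size()) {
            return target.apply(request);
        }
        return chain.get(i).aroundInvoke(request,
                req -> invoke(chain, i + 1, req, target));
    }

    public static void main(String[] args) {
        // An interceptor that logs around the call...
        Interceptor logging = (req, next) -> {
            System.out.println("before: " + req);
            String resp = next.apply(req);
            System.out.println("after: " + resp);
            return resp;
        };
        // ...and one that rewrites the request body on the way in.
        Interceptor upperCase = (req, next) -> next.apply(req.toUpperCase());

        String response = invoke(List.of(logging, upperCase), 0,
                "post /book", req -> "201 Created for " + req);
        System.out.println(response);
    }
}
```

The logging interceptor sees the request before the rewrite and the response after the business method – exactly the wrap-around behaviour the interceptor model provides.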

In the coming chapters I will be focusing on how to use the above-mentioned features and when to use them. Till then…

Arjun, Without a Doubt – Review

Story – the two minute version

The book starts with the POV of Arjuna in the forest where the Pandavas and Kunti are living incognito after escaping from the house of lac. Then the scene shifts to Panchal, where the kingdom is preparing for the Swayamvar of Draupadi. Here, the readers are introduced to Krishna and to Draupadi through her own words. From there on, each event in the Mahabharata is portrayed from the POV of either Arjuna or Draupadi.

What I liked

  1. The relationship of Arjuna and Krishna. In any book related to the Mahabharata, the first thing I look for is the portrayal of Parth-Madhav. The author has done full justice in this regard. I would dare say that this is the first time an Indian author has depicted their friendship as it is in the epic.

  2. Arjuna’s dedication to archery. Dr. Shinde beautifully depicts the sweat and blood Arjuna shed to become the peerless archer that he was. The majority of authors forget this aspect of Arjuna.

  3. Lord Indra’s pride in the victories of Arjuna and his love for Arjuna.

  4. Karna’s one-sided rivalry with Arjuna. In the epic, Arjuna had only one rival – himself. However, with the emergence of Karna as a tragic hero in Indian literature, this aspect of Arjuna is selectively forgotten. That is not the case with Arjuna, Without a Doubt. There is a long conversation between the three Krishnas that makes it clear that Arjuna neither considered Karna a rival nor was ever intimidated by him.

  5. The unique relationship between Arjuna and Draupadi. In the epic, most of their conversation happens through eyes, smiles and sarcastic banter. In this book, these translate into explicit conversations, which makes their relationship much easier to understand.

What could have been better

  1. Arjuna’s thoughts about Khandav-dahana. This is not the first time an author has shown Arjuna as traumatized by his role in the burning of the Khandava forest. However, nowhere in the epic is this mentioned.

  2. Facts about Gandiva, specifically the fact that nobody except Krishna and Arjuna could lift it. Draupadi could never have taken it to her room.

  3. The reason for Arjuna’s silence during the dice hall incident. I won’t spoil it for anyone who has not yet read the book. However, I feel a better reason could have been found in his inner struggle between cold logic and emotion.

  4. Portrayal of Subhadra. In the epic, the only person for whom Arjuna openly declares his love is Subhadra. That could have been taken into consideration.

  5. Darker shades in the portrayal of other Pandavas and Kunti. I am not going into details as that could spoil many of the twists in this tale. The portrayal could have been more balanced.

Overall Rating

4 out of 5. A must-read for any fan of Arjuna or of the Arjuna-Draupadi relationship.

SDL Programming in Linux: Getting Started with OpenGL

SDL is the foundation on which a game can be built without much ado. However, SDL is not complete in itself. It just provides services through which the interaction between the various components of a game or simulation, as well as the game's interaction with the OS, becomes seamless. If there are no components to utilize these services, then these services remain just a proof of concept. In a gaming engine, most of the time, these services are required by the rendering and AI components. From this part onwards I will be concentrating on the rendering component and its interaction with SDL; the AI component will be covered in the future.

Though SDL supports other graphics libraries, it is most commonly used with OpenGL. The reason is that SDL and OpenGL fit like parts of a puzzle. So, most of the time, the rendering component, or rendering sub-system (the term I will use from now on), of a gaming engine is built upon OpenGL. Hence, understanding OpenGL is a must to build a good rendering sub-system. This part and the articles coming in the near future will detail the different aspects of OpenGL, along with how SDL helps in creating a good framework for future use.

In this part I will be providing the whys and wherefores of OpenGL. The first section details the whys and wherefores, the second section details the steps in creating a basic application, and in the third section I will create a framework using SDL that can be used in the future. In the same section, I will also use simple OpenGL routines to test the framework. That is the agenda for this discussion.

OpenGL- What is it:


If this question is asked, the most common answer one would get is that OpenGL is a graphics library in C. However, this is a misconception. In fact, OpenGL is a low-level graphics library specification. Just like J2EE, OpenGL is nothing but a set of platform-neutral, language-independent and vendor-neutral APIs. These APIs are procedural in nature. In simple terms, this means a programmer does not describe objects and their appearances; instead, he/she details the steps through which an effect or an appearance can be achieved. These steps comprise many OpenGL commands, which include commands to draw graphics primitives such as points, lines and polygons in three dimensions. OpenGL also provides commands and procedures to work with lighting, textures, animations etc. One important aspect to keep in mind is that OpenGL is meant for rendering only. Hence, it does not provide any APIs for I/O management, window management etc. – that is where SDL comes into the picture. To understand how OpenGL renders, it is important to understand how it interfaces between the graphics application and the graphics card. So here we go.

The interfacing works at three levels. They are:

1. Generic Implementation

2. Hardware Implementation

3. OpenGL pipeline

The Generic Implementation provides a rendering layer that sits on top of the OS-specific rendering system, the Hardware Implementation interfaces directly with the hardware, and the pipeline takes each command, processes it and hands it to the hardware. Let's look at the details.

1. Generic Implementation:

Generic Implementation is another name for software rendering. Technically speaking, the Generic Implementation can run anywhere the system can display the generated graphics. The Generic Implementation sits between the program and the software rasterizer. Pictorially it would be:

It is clear from the diagram that the Generic Implementation takes the help of OS-specific APIs to draw the generated graphics. For example, on Windows it is GDI, whereas on *nix systems it is Xlib. The generic implementation on Windows is known as WOGL, and the one on Linux is Mesa 3D.

2. Hardware Implementation:

The problem with the Generic Implementation is that it depends on the OS for rendering, and hence the rendering speed and quality differ from OS to OS. This is where the Hardware Implementation comes in. In this case, calls to the OpenGL APIs are passed directly to the device driver (typically the graphics card's driver). The driver interfaces with the graphics device directly instead of routing calls through the OS-specific graphics system. Diagrammatically:

As is evident from the diagram, the functioning of the Hardware Implementation is totally different from that of the Generic Implementation. Interfacing with the device driver directly enhances both the quality and the speed of the rendered graphics.

3. OpenGL Pipeline:

In essence, a pipeline is a process broken down into finer steps; together, these steps form the pipeline. In a graphics pipeline, each stage or step refines the scene, and in the case of OpenGL the data being refined is vertex data. Whenever an application makes an API call, it is placed in the command buffer along with commands, texture and vertex data. When this buffer is flushed (either programmatically or by the driver), the contained data is passed on to the next step, where the calculation-intensive lighting and transformations are applied. Once this is completed, the next step creates colored images from the geometric, color and texture data. The created image is placed in the frame buffer, i.e. the memory of the graphics device, which is then displayed on the screen. Pictorially this would be:

Though this is a simplified version of the actual process, it provides an insight into the working of OpenGL. That brings this section to a conclusion. However, one question still remains: what are the basic steps in creating an OpenGL application? That is what the next section is about.

OpenGL- Basic Steps towards Application:

Till now, the theory of OpenGL was discussed. Now let's see how to put it into use. To draw any shape onto the screen, there are three main steps. They are:

1. Clearing the screen

2. Resetting the view

3. Drawing the scene

Of these the third step consists of multiple sub-steps. Following are the details:

1. Clearing the Screen:

To set the stage for drawing, clearing the screen is a must. This can be done using the glClear() command. This command clears the screen by resetting the values of the bit planes of the viewport. glClear() takes a single argument that is the bitwise OR of several values indicating which buffers are to be cleared. The values of the parameter can be:


GL_COLOR_BUFFER_BIT – It indicates that the buffers currently enabled for color writing have to be cleared.

GL_DEPTH_BUFFER_BIT – This is used to clear the depth buffer.

GL_ACCUM_BUFFER_BIT – If the accumulation buffer has to be cleared, use this.

GL_STENCIL_BUFFER_BIT – This is passed as the parameter when the stencil buffer has to be cleared.
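These flag values are plain bit masks, so combining them is ordinary C bit arithmetic. A small self-contained sketch of the idiom (the constants and the function name here are illustrative stand-ins, not the real OpenGL symbols):

```c
#include <stdio.h>

/* Illustrative stand-ins for the GL_*_BUFFER_BIT constants: each buffer
   gets its own bit, so any combination fits in one integer mask. */
enum {
    COLOR_BUFFER_BIT   = 1 << 0,
    DEPTH_BUFFER_BIT   = 1 << 1,
    ACCUM_BUFFER_BIT   = 1 << 2,
    STENCIL_BUFFER_BIT = 1 << 3
};

/* Returns nonzero when `mask` asks for the buffer identified by `bit`
   to be cleared - the per-buffer test an implementation performs. */
int wants_clear(unsigned mask, unsigned bit)
{
    return (mask & bit) != 0;
}
```

This is why a single call such as glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) can clear two buffers at once.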

Next, the color to be used as the erasing color is specified. This can be done using glClearColor(). This command specifies the color to which the color buffers are reset when they are cleared. That means that when the specified color buffers are cleared, the screen is repainted accordingly. So, to set the clearing color to blue and clear the depth buffer, the statements would be:

glClearColor(0.0f, 0.0f, 1.0f, 0.0f);
glClear(GL_DEPTH_BUFFER_BIT);
2. Resetting the View:

The background and the required buffers have been cleared. But the actual model of the image is based on the view. The view can be considered the matrix representation of the image. So, before drawing, this matrix has to be set to the identity matrix. This is done using glLoadIdentity(). The statement would be:

glLoadIdentity();
3. Drawing the Scene:

To draw the scene, we need to tell OpenGL two things:

a. Start and Stop the drawing:

These commands are issued through calls to glBegin() and glEnd(). glBegin() takes one parameter: the type of shape to be drawn. To draw using three points, use GL_TRIANGLES; GL_QUADS to use four points; and GL_POLYGON to use multiple points. glEnd() tells OpenGL to stop the drawing. For example, to draw a triangle the statements would be:

glBegin(GL_TRIANGLES);
glEnd();
The drawing commands come between these commands.

b. Issue the drawing commands:

In the drawing commands, vertex data is specified. These commands are of the form glVertex*f(), where * corresponds to the number of parameters: 2 or 3. Each call creates a point, which is then connected with the point created by the previous call. So, to create a triangle with the coordinates (0.0, 1.0, 0.0), (-1.0, -1.0, 0.0) and (1.0, -1.0, 0.0), the commands would be:


glVertex3f( 0.0f,  1.0f, 0.0f);
glVertex3f(-1.0f, -1.0f, 0.0f);
glVertex3f( 1.0f, -1.0f, 0.0f);
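As a side check, all three vertices lie in the z = 0 plane, so plain 2D geometry applies to them; the shoelace formula confirms they enclose a non-degenerate triangle. A quick plain-C sketch (the helper name is mine, and this is not OpenGL code):

```c
#include <math.h>

/* Shoelace formula for the area of a triangle lying in the z = 0
   plane; a zero result would mean the three points are collinear. */
double tri_area(double x1, double y1, double x2, double y2,
                double x3, double y3)
{
    return 0.5 * fabs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2));
}
```

For the coordinates above the area works out to 2.0, so the three calls really do describe a visible triangle.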


That's all there is to drawing objects with OpenGL. In the next section, these commands will be used to put the SDL-based framework to the test.

SDL Based framework- Creation & Testing:

Till now, I have discussed various APIs of SDL. Now it's time to put them together so that they can work with OpenGL. So here we go.

First the includes:

#include <stdio.h>    // Include the standard IO header
#include <stdlib.h>   // and the standard lib header
#include <string.h>   // and the string lib header
#include <stdbool.h>  // bool type (when building as C)
#include <GL/gl.h>    // we're including the OpenGL header
#include <GL/glu.h>   // and the GLU header
#include <SDL.h>      // and the SDL header

// Window parameters used throughout (example values):
#define SCREEN_W   640
#define SCREEN_H   480
#define SCREEN_BPP 16


The global variables:

bool isProgramLooping;   // we're using this one to know if the program
                         // must go on in the main loop
SDL_Surface *Screen;     // the screen surface

// Example user state referenced in Initialize() below; the struct
// layout here is only a guess at what the full framework declares:
struct { bool Visible, MouseFocus, KeyboardFocus; } AppStatus;
float angle, cnt1, cnt2;


Now the common functionalities- initialization, termination, full-screen toggling.

bool Initialize(void)                   // Any Application & User Initialization Code Goes Here
{
    AppStatus.Visible       = true;     // At The Beginning, Our App Is Visible
    AppStatus.MouseFocus    = true;     // And Has Both Mouse
    AppStatus.KeyboardFocus = true;     // And Keyboard Input Focus

    // Start Of User Initialization. These are just examples
    angle = 0.0f;                       // Set The Starting Angle To Zero
    cnt1  = 0.0f;                       // Set The Cos (For The X Axis) Counter To Zero
    cnt2  = 0.0f;                       // Set The Sin (For The Y Axis) Counter To Zero

    // If a resource fails to load during initialization, report and fail:
    // printf("Cannot load graphic: %s\n", SDL_GetError());
    // return false;

    return true;                        // Return TRUE (Initialization Successful)
}


void Deinitialize(void)                 // Any User Deinitialization Goes Here
{
    return;                             // We Have Nothing To Deinit Now
}


void TerminateApplication(void)         // Terminate The Application
{
    static SDL_Event Q;                 // We're Sending A SDL_QUIT Event

    Q.type = SDL_QUIT;                  // To The SDL Event Queue

    if (SDL_PushEvent(&Q) == -1)        // Try To Send The Event
    {
        printf("SDL_QUIT event can't be pushed: %s\n", SDL_GetError());
        exit(1);                        // And Exit
    }

    return;                             // We're Always Making Our Functions Return
}


void ToggleFullscreen(void)             // Toggle Fullscreen/Windowed (Works On Linux/BeOS Only)
{
    SDL_Surface *S;                     // A Surface To Point At The Screen

    S = SDL_GetVideoSurface();          // Get The Video Surface

    if (!S || (SDL_WM_ToggleFullScreen(S) != 1))   // If SDL_GetVideoSurface Failed, Or We Can't Toggle To Fullscreen
    {
        printf("Unable to toggle fullscreen: %s\n", SDL_GetError());   // We're Reporting The Error, But We're Not Exiting
    }

    return;                             // Always Return
}


Next come the OpenGL parts: creating an OpenGL window, in other words initializing OpenGL. But the window needs updating whenever it is moved or resized, right from the moment it is created. Hence the reshape function:

void ReshapeGL(int width, int height)   // Reshape The Window When It's Moved Or Resized
{
    glViewport(0, 0, (GLsizei)(width), (GLsizei)(height));   // Reset The Current Viewport
    glMatrixMode(GL_PROJECTION);        // Select The Projection Matrix
    glLoadIdentity();                   // Reset The Projection Matrix

    gluPerspective(45.0f, (GLfloat)(width)/(GLfloat)(height), 1.0f, 100.0f);   // Calculate The Aspect Ratio Of The Window
    glMatrixMode(GL_MODELVIEW);         // Select The Modelview Matrix
    glLoadIdentity();                   // Reset The Modelview Matrix
}

bool CreateWindowGL(int W, int H, int B, Uint32 F)   // This Code Creates Our OpenGL Window
{
    SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 5);         // In Order To Use SDL_OPENGLBLIT We Have To
    SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 5);       // Set GL Attributes First:
    SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 5);
    SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 16);
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);     // Colors And Double Buffering

    if (!(Screen = SDL_SetVideoMode(W, H, B, F)))    // We're Using SDL_SetVideoMode To Create The Window
    {
        return false;                                // If It Fails, We're Returning False
    }

    SDL_FillRect(Screen, NULL, SDL_MapRGBA(Screen->format, 0, 0, 0, 0));
    ReshapeGL(SCREEN_W, SCREEN_H);                   // We're Calling Reshape As The Window Is Created

    return true;                                     // Return TRUE (Creation Successful)
}

I will be discussing the APIs used in the reshape function in the next issue. Next is the draw function. It also contains the test code:

void Draw3D(SDL_Surface *S)             // All The OpenGL Drawing Code Goes Here
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);   // Clear Screen And Depth Buffer;
                                                          // The Clear Color Was Set At Init
    glLoadIdentity();                   // Reset The Modelview Matrix

    glBegin(GL_TRIANGLES);              // Draw The Test Triangle
        glVertex3f( 0.0f,  1.0f, 0.0f);
        glVertex3f(-1.0f, -1.0f, 0.0f);
        glVertex3f( 1.0f, -1.0f, 0.0f);
    glEnd();

    glFlush();                          // Flush The GL Rendering Pipeline
}

Now the main(). It contains the keyboard handling code

int main(int argc, char **argv)
{
    SDL_Event E;       // An Event Used In The Polling Process
    Uint8 *Keys;       // A Pointer To An Array That Will Contain The Keyboard Snapshot
    Uint32 Vflags;     // Our Video Flags

    Screen = NULL;
    Keys   = NULL;
    Vflags = SDL_HWSURFACE | SDL_OPENGLBLIT;   // A Hardware Surface And The Special
                                               // OPENGLBLIT Mode, So We Can Even Blit
                                               // 2D Graphics Into Our OpenGL Scene

    if (SDL_Init(SDL_INIT_VIDEO) < 0)          // Init The SDL Library, The VIDEO Subsystem
    {
        printf("Unable to open SDL: %s\n", SDL_GetError());   // If SDL Can't Be Initialized
        exit(1);
    }

    atexit(SDL_Quit);                          // SDL's Been Init, Now We're Making Sure That
                                               // SDL_Quit Will Be Called In Case Of exit()

    if (!CreateWindowGL(SCREEN_W, SCREEN_H, SCREEN_BPP, Vflags))   // Video Flags Are Set, Creating The Window
    {
        printf("Unable to open screen surface: %s\n", SDL_GetError());
        exit(1);
    }

    if (!InitGL(Screen))                       // We're Calling The OpenGL Init Function
    {
        printf("Can't init GL: %s\n", SDL_GetError());
        exit(1);
    }

    if (!Initialize())                         // Init Of The Application
    {
        printf("App init failed: %s\n", SDL_GetError());
        exit(1);
    }

    isProgramLooping = true;
    while (isProgramLooping)                   // And While It's Looping
    {
        if (SDL_PollEvent(&E))                 // Fetch An Event From The Queue (If Any)
        {
            switch (E.type)                    // And Process It
            {
            case SDL_QUIT:                     // Is It A Quit Event?
                isProgramLooping = false;
                break;

            case SDL_VIDEORESIZE:              // Is It A Resize Event?
                ReshapeGL(E.resize.w, E.resize.h);
                break;                         // And Break

            case SDL_KEYDOWN:                  // Someone Has Pressed A Key?
                Keys = SDL_GetKeyState(NULL);  // Take A Snapshot Of The Keyboard
                break;
            }
        }
        else                                   // No Events Pending, So We Can Draw
        {
            Draw3D(Screen);                    // Do The Drawings!
            SDL_GL_SwapBuffers();              // And Swap The Buffers (We're Double-Buffering, Remember?)
        }
    }

    Deinitialize();                            // User-Defined Deinitialization

    exit(0);                                   // And Finally We're Out; exit() Will Call SDL_Quit

    return 0;                                  // We're Standard: main() Must Return A Value
}



That brings us to the end of this discussion. This time it was a bit lengthy. But the framework that has just been developed will work as the foundation for developing functionality like lighting, texture mapping, animation and so on. The next topic will be using timers to animate the triangle just drawn. Till next time.

Game Programming using SDL: Working with File I/O API

File input/output, also generally known as file I/O, is one of the essential components of any software. Games are no exception. The file I/O can be for loading a background, a texture or a simple text indicating the level or score. It can also be used for saving the player's current statistics, level details or the custom map of the level. Whatever the scenario, without good and optimized file I/O, the game play will not become a rewarding experience for the player. With so many platforms to target, optimizing an API and making it generic enough to be used on multiple platforms becomes an arduous task. That is where the file I/O API of SDL comes into play. The APIs provided by SDL are not platform-specific; the platform-specific aspects are taken care of by SDL under the hood. Hence, the developer has to focus only on the logic of the game and not on the 'logistics' of file operations. The focus of this discussion will be on the file I/O provided by SDL. The first section will be about the whys and wherefores of the API. In the second section, the steps for using the API will be detailed. The last section will have an example that makes use of the API discussed in the first two sections. That is the outline for this discussion.

SDL File I/O API – the Whys and Wherefores

The file I/O API is one of the lesser-documented APIs of SDL. However, the features provided by the API ease many file I/O operations, such as loading an image from an archived (zip or gzip) file. The main component of the API that makes such operations easy to perform is the structure named SDL_RWops. Since the SDL_RWops structure forms the basis of file I/O, the file operations as well as the API are also known as RWops. So, in short, the RWops API consists of the following:

1. The SDL_RWops structure

2. The functions that operate upon the structure

The former takes file handles as well as pointers to memory-mapped files. The latter provides ways to read from or write to those file handles and memory areas. Here are the details.

1. The SDL_RWops structure:

It is akin in functionality to the FILE structure provided by the standard C library. In other words, SDL_RWops is the read/write operations structure. All the file I/O functions make use of this structure to keep track of file handles, the current position being accessed and so on. To use the API, it is not necessary to know the internals of this structure. The main point to keep in mind is that all of the RWops API needs this structure to work. So, any exceptions encountered while running an application that makes use of the RWops API can be traced back to problems with the initialization of this structure. Another point to keep in mind is that it is also called the 'RWops structure'.
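Since SDL_RWops mirrors stdio's FILE, its position-tracking behaviour can be previewed with plain stdio. A minimal sketch (the helper name and scratch-file name are mine, and this uses the standard library rather than SDL):

```c
#include <stdio.h>

/* Writes a ten-byte scratch file, reads `n` bytes back through a FILE
   handle and returns the offset the handle then reports. The handle
   itself tracks the current position, exactly the bookkeeping that
   SDL_RWops performs for its file handles. Returns -1 on error. */
long position_after_read(size_t n)
{
    const char *path = "rwops_demo.bin";
    char buf[32];
    long pos;

    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    fwrite("0123456789", 1, 10, f);
    fclose(f);

    f = fopen(path, "rb");
    if (!f) return -1;
    if (n > sizeof(buf)) n = sizeof(buf);
    fread(buf, 1, n, f);
    pos = ftell(f);        /* the structure "knows" where we are */
    fclose(f);
    remove(path);
    return pos;
}
```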

2. The functions that operate upon the structure:

Most of the functions provided by the RWops API are similar in functionality to their counterparts found in the standard library. The most commonly used functions of RWops are:

a. SDL_RWFromFile

It opens a file whose name has been passed as the argument. Apart from the filename, the second argument is the mode in which the file has to be opened. The function returns a pointer to the SDL_RWops structure corresponding to the opened file. The following statements open a file named "tux.bmp" in read mode and return a pointer to its SDL_RWops structure.

SDL_RWops *file;

file = SDL_RWFromFile("tux.bmp", "r");

b. SDL_RWFromMem

It prepares or allocates a memory area for RWops to use. In other words, it sets up the RWops structure based on a memory area of a certain size. It takes two arguments: the memory (or a pointer to the memory) and the size of the memory. One scenario where this method comes in handy is when one wants to save the current screen as a bitmap. The following example sets up an RWops structure based on a byte array.

char bitmap[310000];

SDL_RWops *rw;

rw = SDL_RWFromMem(bitmap, sizeof(bitmap));

c. SDL_FreeRW

It frees up the memory allocated to the structure. It takes a pointer to the RWops structure as its argument.

That brings us to the end of this section. Next section will be about the steps to use the API.

Using RWops API – Step-by-Step:

There are three basic steps to use RWops API. They are

1. Get/initialize the SDL_RWops structure

2. Perform operations on the structure

3. Free the structure

Even though the steps seem similar to those of using the standard API, in the case of RWops the same structure can be used to access memory, a stream or a file handle. Here are the details.

1. Get or initialize the SDL_RWops structure

As discussed in the previous section, the SDL_RWops structure forms the basis of any file I/O operation in SDL. So, the first step is to get the RWops structure. There are four ways to get or initialize the structure. They are:

a. Using a filename

In this case, the structure is initialized directly from the file whose name has been provided. To do so, the SDL_RWFromFile function needs to be used. The following statements initialize the structure from the "texture.bmp" file:

SDL_RWops *file;

file = SDL_RWFromFile("texture.bmp", "r");


In the above statements, the structure is initialized from the filename passed as the first argument. The second argument is the mode in which the structure is initialized. In this case the mode is "r", i.e. read-only. Hence, the structure can be used only to read from the "texture.bmp" file. Following are the acceptable values for the mode argument:

"r" – Open a file for reading. The file must exist.

"w" – Create an empty file for writing. If a file with the same name already exists, its content is erased and the file is treated as a new empty file.

"a" – Append to a file. Write operations append data at the end of the file. The file is created if it does not exist.

"r+" – Open a file for update (both reading and writing). The file must exist.

"w+" – Create an empty file for both reading and writing. If a file with the same name already exists, its content is erased and the file is treated as a new empty file.

"a+" – Open a file for reading and appending. All write operations are performed at the end of the file, protecting the previous content from being overwritten. One can reposition (fseek, rewind) the internal pointer anywhere in the file for reading, but write operations will move it back to the end of the file. The file is created if it does not exist.
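These mode strings are the same ones the C standard library's fopen uses, so their behaviour can be checked with plain stdio. A small sketch (the file and function names are mine):

```c
#include <stdio.h>
#include <string.h>

/* Exercises "w" (truncate/create), "a" (append) and "r" (read; the
   file must exist) and returns 1 when the final contents match what
   the mode semantics predict. */
int demo_modes(const char *path)
{
    FILE *f = fopen(path, "w");    /* "w": start from an empty file */
    if (!f) return 0;
    fputs("level=1\n", f);
    fclose(f);

    f = fopen(path, "a");          /* "a": writes go to the end */
    if (!f) return 0;
    fputs("score=42\n", f);
    fclose(f);

    char buf[64] = {0};
    f = fopen(path, "r");          /* "r": the file must exist */
    if (!f) return 0;
    fread(buf, 1, sizeof(buf) - 1, f);
    fclose(f);
    remove(path);

    return strcmp(buf, "level=1\nscore=42\n") == 0;
}
```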

b. From file pointer using SDL_RWFromFP

In this case, a file pointer is used to initialize the RWops structure. The file pointer is opened using the file I/O of the standard library. This function is not present in the latest version of SDL, since the Windows platform does not support DLLs using files opened by the application, and the SDL libraries are loaded as DLLs.

c. From a pointer in memory using SDL_RWFromMem

As discussed in the first section, SDL_RWFromMem allows one to create an RWops structure from memory, based on a pointer to that memory. On one hand, this comes in handy when working with file data placed in memory by another API, such as the gzip API. On the other hand, if one has to write something to a specific memory location, which can then be transferred to a file, this function is also handy. The following statements depict the second scenario, where a memory location has to be written to:

char bitmap[310000];

SDL_RWops *rw;

rw = SDL_RWFromMem(bitmap, sizeof(bitmap));

SDL_SaveBMP_RW(screen_bitmap, rw, 0);


where screen_bitmap is a pointer to the SDL_Surface containing the current screen data.

d. Allocating and filling it in manually using SDL_AllocRW:

Using SDL_AllocRW, one can get an empty RWops structure, the fields of which need to be filled in manually. The following statement creates an empty RWops structure:

SDL_RWops *c=SDL_AllocRW();

Explaining how to fill the structure is beyond the scope of this discussion.

This brings us to the second step.

2. Perform operations on the structure

Once the RWops structure is initialized, it can be used for any kind of file I/O permitted by SDL: updating the texture of a scene, saving the current screen as a bitmap, or getting the contents of a zip file and updating the screen with it. It can also be used to save the current map or player statistics. The possibilities are many. For example, the following statements read a bitmap file into a surface that can then be displayed on the screen:

SDL_RWops *file;

SDL_Surface *image;


file = SDL_RWFromFile("myimage.bmp", "rb");

image = SDL_LoadBMP_RW(file, 1); // 1 means the file will be automatically closed


3. Free the RWops structure

The last step is to free the structure once its usage is complete. This step is mandatory if the structure was created/initialized using SDL_AllocRW. To free the structure, pass the variable containing the RWops structure pointer to the SDL_FreeRW function. The following statements free an RWops structure named rw:

SDL_RWops *rw = SDL_AllocRW();

SDL_FreeRW(rw);

That completes the section on the steps to use RWops API.

RWops API – In the real world

In the real world, the API is not used standalone. Most of the time, it is used in conjunction with some other API, such as zlib, which reads archived (zip, gzip etc.) files. The example I am about to discuss makes use of the zlib API to read an archived file. The example will be developed as a method that will:

a. Accept the name/full path of the archive

b. Return the RWops structure corresponding to the archive

Let us start with the header file to be included.

#include "SDL.h"

#include <stdio.h>

#include <stdlib.h>

#include <zlib.h>

The zlib.h header is required for the zlib API. Next is the function. It takes the archive name and the size of the memory to be allocated for the file content as its arguments, and returns the RWops structure:

SDL_RWops* GetFromArchive( char *archive, int bufferSize)



The next step is to declare variables for the RWops structure and the gzFile. gzFile is the zlib equivalent of standard I/O's FILE structure. A buffer of the size specified by the bufferSize argument will also be allocated.

SDL_RWops* GetFromArchive(char *archive, int bufferSize)
{
    /* gzFile is the Zlib equivalent of FILE from stdio */
    gzFile file;

    /* This is the RWops structure we'll be using */
    SDL_RWops *rw;

    /* Heap-allocated so that the data outlives this function; the
       returned RWops will keep reading from it */
    Uint8 *buffer = (Uint8 *)malloc(bufferSize);

    /* We'll need to store the actual size of the file when it comes in */
    int filesize;

Next, open the archive, fill the buffer with the contents of the archive and create the RWops structure from the buffer. The function will then return the created RWops structure:

SDL_RWops* GetFromArchive(char *archive, int bufferSize)
{
    /* gzFile is the Zlib equivalent of FILE from stdio */
    gzFile file;

    /* This is the RWops structure we'll be using */
    SDL_RWops *rw;

    /* Heap-allocated so that the data outlives this function; the
       returned RWops will keep reading from it, so a real application
       would free it once done with the RWops */
    Uint8 *buffer = (Uint8 *)malloc(bufferSize);

    /* We'll need to store the actual size of the file when it comes in */
    int filesize;

    /* Open the archive and fill the buffer with its (decompressed) contents */
    file = gzopen(archive, "rb");
    if (file == NULL)
        return NULL;

    filesize = gzread(file, buffer, bufferSize);
    gzclose(file);

    /* Create RWops from memory - SDL_RWFromMem needs to know where
       the data is, and how big it is (that is why the file size was saved) */
    rw = SDL_RWFromMem(buffer, filesize);

    return rw;
}
That completes the example. The example assumes knowledge of the zlib API. Though RWops provides a way to read from and write to files, neither RWops nor SDL itself provides an easy way to manipulate the loaded images. That is where the SDL Image library comes into the picture. Working with the SDL Image API will be the focus of the next discussion. Till then…

Packt is celebrating the publication of its 1000th title

28th September 2012
Packt Publishing reaches 1000 IT titles and celebrates with an open invitation

Birmingham-based IT publisher Packt Publishing is about to publish its 1000th title. Packt books are renowned among developers for being uniquely practical and focused, but you’d be forgiven for not yet being in the know – Packt books cover highly specific tools and technologies which you might not expect to see a high quality book on.

Packt is certain that among its 1000 titles there is at least one book that everyone in IT will find useful right away, and it is inviting anyone to choose and download any one of its eBooks for free over its celebration weekend of 28th-30th September 2012. Packt is also opening its online library for free for a week, to give customers an easy way to research their choice of free eBook.

Packt supports many of the Open Source projects covered by its books through a project royalty donation, which has contributed over $400,000 to Open Source projects up to now. As part of the celebration Packt is allocating $30,000 to share between projects and authors as part of the weekend giveaway, allocated based on the number of copies of each title downloaded.

Dave Maclean, founder of Packt Publishing:

“At Packt we set out 8 years ago to bring practical, up to date and easy to use technical books to the specialist tools and technologies that had been largely overlooked by IT publishers. Today, I am really proud that with our authors and partners we have been able to make useful books available on over 1000 topics and make our contribution to the development community.”

More details can be found at