Understanding the POST Method: The Create Operation of the REST World


Create, Read, Update and Delete, or CRUD for short, form the fundamental operations provided by any service. In REST, a POST request is used to add a new entry to the collection represented by a resource. In other words, to create a new item, REST makes use of the POST HTTP method. In this post, the focus will be on what POST is.

To understand POST, it is essential to understand what it expects from a request and what kind of response it can provide, including the headers. Hence, in this post we will discuss in detail what a POST request and response entail. The next post in this series will focus on how to implement the create functionality for a REST service using RESTEasy and the POST method.

What is POST?

POST is one of the most commonly used methods of HTTP. Whenever you click the submit button on a registration or application form, you are sending a POST request to the server. In the world of REST, POST is essentially used to create a new entry within the collection represented by a resource. As with any HTTP method, to understand how it works, we have to understand what a POST request and response contain and what they mean.

Request

According to the spec, “The POST method is used to request that the origin server accept the entity enclosed in the request as a new subordinate of the resource identified by the Request-URI in the Request-Line”. The important part of the definition is “accept the entity enclosed in the request as a new subordinate”. It tells us the following:

  • The request will include an entity. This is an important point for validation and validation-related responses.
  • The server must accept the enclosed or included entity as an item that can be added to the collection of the resource identified by the URI. This is important because it means the server cannot process it as part of a different resource.

The aforementioned points bring up a question – how will the server know the format (or MIME type) of the entity? To answer this, we have to look at the parts of a POST request: the header and the body.

Header

The header is the information sent to the server detailing what the client wants and what it will accept as a response. Basically, it contains the following:

  • Method – the HTTP method to be executed by the server. In the case of a POST request it, obviously, is POST.
  • URL/URI – the identifying URI of the resource.
  • Version – the version of HTTP, either 1.0 or 1.1.
  • Content-Type – the MIME type/format of the content. For example, if the format of the entity is XML, Content-Type will be either text/xml or application/xml.
  • Content-Length – the length/size of the enclosed entity or content.

An example of a POST request header with http://127.0.0.1:3000/book as the URI and application/xml as the Content-Type would be:

POST /book HTTP/1.1
Host: 127.0.0.1:3000
Content-Type: application/xml
Content-Length: 19

Body

The entity (or data) to be processed is sent as part of the body of a POST request. The body contains data in the format specified in the header. One point to keep in mind is that if the format of the data in the body does not match the format specified in the header, the server can reject the request. For example, if the header specifies the type as application/xml but the body contains JSON, the server can reject the request.

Following is an example of a POST request with header and body:

POST /book HTTP/1.1
Host: 127.0.0.1:3000
Content-Type: application/xml
Content-Length: 69

<?xml version="1.0"?>
<book>
<title>The Invisible Man</title>
</book>

A REST message consists of a request and the corresponding response. We have discussed the POST request. Next, let us look at the POST response.

Response

The POST response tells the client whether the request has been successful or not and, optionally, carries the entity that was created or the reason for the failure of the request. The main points to keep in mind about the POST response are:

  • If an entity has been created as a result of processing the request, the server should return the entity as part of the response body. The header should contain the URI specifying the location of the entity.
  • By default, the response is not cacheable unless cache control headers are present.

Similar to the request, the response also contains a header and a body. However, in the case of the response, the body is optional.

Response Header

The response header provides details of the result of the processed request. The most commonly used headers are:

  • Cache-Control

It defines the cache control parameters, such as the maximum amount of time for which the response can be cached, where the cached copy may be retrieved from, etc.

  • Content-Type

This header tells the client the format or MIME type of the content in the body.

  • Content-Length

It is through this header that the client learns the size of the content within the body.

  • Location

The client uses this header to read the URI of the newly created entity.

  • Status

It tells the status of the request. If the request has been processed successfully, the status will be 201, meaning Created.

Among the above, Status and Location are the most important, since they tell us whether the request was successful and, if it was, the URI of the newly created entity.

An example of a response with status 201 Created and http://localhost:3000/book/1 as the location would be:

HTTP/1.1 201 Created
Location: http://localhost:3000/book/1

Response Body

The body part is optional for POST. It will be present only if a new entity was created; otherwise the body will be empty. The format of the body, if present, should be the same as the one requested by the client through the Accept header of the request. That is, if the Accept header of the request is application/json, then the Content-Type of the response header must be application/json and the body must carry the data/entity in JSON format.
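
Putting the pieces together, a successful POST response that carries the created entity could look like the following; the body shown here is purely illustrative:

HTTP/1.1 201 Created
Location: http://localhost:3000/book/1
Content-Type: application/xml
Content-Length: 69

<?xml version="1.0"?>
<book>
<title>The Invisible Man</title>
</book>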

That brings us to the end of this post. In the next one, I will detail the steps required to leverage RESTEasy for creating a REST service, how to set up a RESTEasy based project with Maven and how to deploy as well as test the implemented functionality. Till then…

A Brief History of Astronomy


The following has been translated from the book "Key to Nature", published by Kerala Sasthra Sahitya Parishad. This is the second draft.

Astronomy is as old as civilization. The first humans who looked at the vast sky and the twinkling stars in amazement were the progenitors of Astronomy.

Ancient civilizations followed the Geocentric Theory of the Universe. The ancient Greek civilization created a model of the Universe by calculating the motion of heavenly bodies. The model of the universe proposed by Claudius Ptolemy was the result of the meticulous work done by the Greeks.

The ancient Babylonians were the first catalogers of the stars. By around 1600 B.C., they had more or less recorded the motion of the planets. By 800 B.C., they were able to record the movement of planets with respect to distant stars. The Babylonians were able to, by and large, understand the recurring character of planetary motion. Astronomy was controlled solely by the Babylonian priests, who were uninterested in finding explanations for their observations.

It was the Greeks who tried to create a model of universe that was based on geometry. Pythagoras (582 B.C. – 497 B.C.), a Greek philosopher, tried to explain the universe in mathematical terms.

The Homocentric model of planetary motion put forth by Eudoxus (408 B.C. – 355 B.C.), who was a student of Plato, needs special mention. It tried to explain the periodic behavior in the motion of heavenly bodies by proposing that they moved uniformly along concentric circular paths.

Aristarchus theorized that it is the sun, and not the earth, that is the center of the universe. He argued that the stars and planets rise and set due to the motion of the earth around the sun, and not the other way round.

Ptolemy's period, the early part of the second century A.D., is considered the pinnacle of ancient Greek astronomy. The Almagest was Ptolemy's most famous creation. The model of the universe that he presented in the Almagest remained relevant till the time of Copernicus – around 1400 years later.

However, the model of the universe that ancient Indians had conceptualized was vaster than any of the aforementioned models. In the Pranava-Vada, authored by sage Gargyayana*, the size of the universe theorized is comparable to the modern understanding of our universe.

The foundation of modern astronomy was laid in the era of Copernicus. His observations contradicted the postulates of Ptolemy's model. He presented the Heliocentric model of the universe in 1529. The rotation of the planets and their revolution around the sun were parts of this theory, and he postulated that the paths of planetary motion were circular. He theorized that the rotation of the earth caused day and night as well as the apparent nightly motion of heavenly bodies; that the earth's revolution around the sun caused the apparent yearly motion of the sun; and that the differing orbits of the planets around the sun caused their retrograde motion.

Tycho Brahe, who is considered to be the father of observational astronomy, did not consider the Heliocentric model of Copernicus to be correct. However, Kepler, who was Brahe's assistant, concurred with the Copernican model. He formulated the famous laws of planetary motion. Galileo Galilei, who was a contemporary of Kepler, used the telescope to study the paths of heavenly bodies. He studied the surface of the moon. He found that Venus waxes and wanes. He discovered the first four moons of Jupiter. Based on his observations, he supported the Heliocentric model of Copernicus. After Galileo, Sir Isaac Newton discovered that it is gravity that controls the paths of planetary motion. He explained the properties of planetary motion using mathematics, and he also proved Kepler's laws of planetary motion mathematically.

The foundation and advance of modern astronomy are due to the invaluable contributions of brilliant scientists such as Copernicus, Kepler, Galileo and Newton. Along with their contributions, the invention of telescopes and clocks laid a strong foundation for the progress of modern astronomy. The advent of telescopes helped scientists look deeper and deeper into space. The telescope invented by Galileo was based on the refraction of light. In 1670, Newton invented a better type of telescope that worked on the principle of reflection of light. As science progressed, various other telescopes were invented that worked with other components of the electromagnetic spectrum, such as infrared, X-rays and so on.

A type of telescope that could be used to study the radio waves emitted by heavenly bodies was the brainchild of Karl Jansky of Bell Telephone Laboratories. Radio telescopes gave birth to a new branch of astronomy known as radio astronomy. Advances in the understanding of outer space led to telescopes that could see into the farthest reaches of the sky, which resulted in advanced research into fields based on other radiations, including X-rays, gamma rays and ultraviolet rays. The end of the 17th century saw the invention of pendulum clocks, which brought forth better clocks able to measure periodic time more accurately. Later, the discovery of the laser and the maser made such measurements even more accurate.

The Doppler Effect states that relative motion between the source and the observer changes the frequency of the wave emitted by the source. Vesto Melvin Slipher, an American scientist, applied the Doppler Effect to the light traveling from galaxies to explain the shift in their spectral lines. It paved the way for a new mechanism to measure the distance of faraway galaxies. This led to Hubble's discovery that galaxies are moving away from each other at speeds proportional to their distance. Hubble's discovery, coupled with the Doppler Effect, helped in measuring the distances between galaxies accurately.

Edwin Hubble's discovery that the universe is expanding became a milestone for astronomy. In reality, the theoretical context for such a discovery was already in place: Einstein's theory of Relativity predicts an expanding universe. However, Einstein himself did not believe the prediction, which is why he introduced the Cosmological Constant. At the same time, Alexander Friedman and Georges Lemaitre went on to study the evolution of the expanding universe. The theories put forth by Einstein and Max Planck on the properties of black body radiation proved to be important in understanding the genesis and evolution of an expanding universe. A black body is a system or object that absorbs all the radiation incident on it and then re-radiates it without losing any radiation in the process. The radiation that escaped during the Big Bang follows the model of black body radiation.

George Gamow applied the theory of black body radiation to the early universe. Gamow, Ralph Alpher and Robert Herman studied the problem of how the elements were formed in the nascent universe, while Fred Hoyle and William Fowler worked on nuclear synthesis within stars. According to Gamow, all the elements were synthesized during the Big Bang; according to the calculations of Hoyle and Fowler, however, the heavy elements are synthesized within the cores of stars and during supernova explosions. Based on this, the Steady State Theory was put forth by Hoyle, Thomas Gold and Hermann Bondi. According to it, the universe has no beginning or end; it was always in the same state as we see it today. Stellar nucleosynthesis of the elements was proved to be correct – this was the one area in which the Big Bang Theory had to improve. The discovery of the cosmic background radiation boosted the acceptance of the Big Bang Theory. In the recent past, proofs supporting the Inflationary theory have been found by the Wilkinson Microwave Anisotropy Probe. Astronomy has truly become a branch of science.

Many exotic objects, such as neutron stars, binary star systems, pulsars, quasars and black holes, have been discovered. Astronomy, which once depended entirely on the visible spectrum of light, now makes use of the entire spectrum of electromagnetic radiation, from gamma rays to X-rays and beyond. The physicists who study elementary particles are providing significant information to astronomers, and this transfer of information is happening both ways. Today, astronomy, and with it astrophysics, is one of the most flourishing branches of science.

*https://en.wikipedia.org/wiki/Pranava-Vada_of_Gargyayana

Introducing REST and RESTEasy


Reusability and the distribution of logic into self-contained units have been the driving force behind the development of technologies such as DCOM and CORBA. However, complexity and platform dependency have kept these from becoming standard technologies for creating and consuming distributed, reusable systems. That is why the concept of the Web Service came into prominence. Since it is based on XML and HTTP, the Web Service soon became the most used standard for implementing distributed systems that cut across languages, platforms and technologies. There are two types of Web Services – SOAP based and REST based. This series of posts will focus on developing REST based Web Services using Java.

REST is short for REpresentational State Transfer. This post concentrates on the whys and wherefores of REST. The first section will focus on the basics of REST. The second section will be about the differences between SOAP based and REST based Web Services. The last section will introduce RESTEasy, which can be used to develop REST based Web Services.

REST – the whys and wherefores

REST, short for REpresentational State Transfer, is an architectural style using which we can develop stateless web services that run over HTTP and that clients developed in different languages can access just as they access any other web page. In REST, every object accessible via HTTP is a resource, and each of these resources can be accessed via a Uniform Resource Identifier or URI.

If you consider the full form of REST, there are two main parts – representational and state transfer. The former relates to the resource itself and the latter relates to the client. Let us say, for the sake of example, there is a library management service that can be accessed using the URI http://library.contessa.com/rest. It contains a resource 1695 that provides the details of a particular book. Clients can access it using the URI http://library.contessa.com/rest/1695. The response is in the form of an HTML page. The HTML page is a representation of the resource 1695 provided by the service. The resource can have many representations – HTML, XML, JSON etc. Receiving the response, which in our example is HTML, places the client in one state.

Next, let's say the HTML contains a link to another resource – the details of the author. If the client traverses to the author resource using the link, the client is placed in a different state. So, in short, whenever the client traverses to a different resource or a different representation of the same resource, its state is transferred from one representation to another. Hence, the term REpresentational State Transfer is used for such services, centred on resources, representations and state changes.

Now that we have the basic concept behind REST, let us look at the most common terms used in REST, which are:

  1. Resource: Anything that can be accessed using a URI is termed a Resource. Scripts that return records from a database, images, web consumable slides etc. are examples of resources.
  2. Representations: A representation is a format in which a client can request a resource. If a Resource can be represented as both XML and HTML, then XML and HTML are its representations.
  3. Methods: The way a client communicates with the server to perform certain operations is defined by the Methods supported by the resource. Since REST uses HTTP as the protocol for communication, a resource can support all or a subset of the methods provided by HTTP. For example, HTTP provides the GET, POST, PUT, OPTIONS, DELETE and HEAD methods; it is up to the resource whether or not to support all of them (see the sketch after this list).
  4. Messages: Each request and response is a message. One important point to keep in mind is that messages should be self-contained. For example, a response containing the details of a resource with id 6743 should contain everything related to that resource. The client should not have to wait for a second response to have complete data about the same resource.
  5. State and session: The currently sent representation is the State of the resource. If the client needs to track what the state was before the current one, it will need to implement a session at its end. In REST, the server is only concerned with the state of the resource, not with that of the client.
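
To make these terms concrete, here is a minimal JAX-RS style sketch of such a resource. The Book class, the findBook lookup and the /books path are illustrative assumptions, not part of any real service:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

@Path("/books")
public class BookResource {

    // the resource is named by a URI (/books/{id}), it supports the GET method,
    // and XML and HTML are its representations
    @GET
    @Path("{id}")
    @Produces({"application/xml", "text/html"})
    public Book getBook(@PathParam("id") int id) {
        return findBook(id); // hypothetical lookup; implementation omitted
    }
}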

So, if we want to define characteristic features of REST on the basis of what we have discussed so far, the features will be:

  1. It is an architectural style and not a framework or toolkit.
  2. It is not a standard. However, it uses standards for communication (HTTP), representations (XML, JSON) etc.
  3. It makes use of a pull based client-server interaction style. The client requests (pulls) a representation of a resource from the server. The server does not send a representation unless a client asks for it.
  4. It is stateless. That means the server does not keep track of the requests it receives. It is the client's responsibility to provide all the required information in each request.
  5. The responses are cacheable. The server must mark each response as either cacheable or non-cacheable so that the client can take advantage of caching mechanisms to improve performance.
  6. The interfaces of resources in REST must be generic. In other words, any two resources must be accessible to the client using the same methods (GET, POST, PUT, DELETE etc.).
  7. All the resources must be named using URIs.
  8. The resources can be interconnected using URIs, through which the client can move from a particular representation of one resource to another representation of a different resource.
  9. It supports layered components such as proxies, gateways etc., so as to implement security, increase efficiency and so on.

The next natural question you will have is how REST differs from SOAP based services. We will tackle that in the next section.

REST and SOAP based services – the differences

There are significant differences between SOAP based services and REST. The major ones can be described using the following points:

  1. Transport Protocol
  2. Based on RPC
  3. Standards
  4. Persistence of state
  5. Uniform resource

The points that set REST and SOAP apart the most are the second and the fourth. Following are the details.

  1. Transport Protocol: REST depends on one transport protocol, HTTP. SOAP based services can be used with a variety of transport protocols.
  2. Based on RPC: REST is not based on RPC, which is why REST can make use of generic interfaces. SOAP itself is RPC based, so each service has its own methods and interfaces. This makes building interfaces generic enough to create toolkits and clients around the services a tough undertaking.
  3. Standards: REST uses existing web standards and does not have standards of its own. This makes creating services easier, as no separate set of libraries and toolkits is required. SOAP based services have their own standards, including WSDL, SOAP, UDDI etc. So to create a service using SOAP you will require a minimum set of libraries to parse WSDL, understand SOAP and register the service with UDDI.
  4. Persistence of state: REST is stateless. The server does not keep track of changes in state, so no session handling is available on the server side. SOAP based services can handle sessions.
  5. Uniform resource: In REST, access to each resource and the methods exposing the operations it supports must be uniform across all resources. For example, each resource is addressable using a URI; there is no intermediary that maps the resource to the URI. In SOAP based services, the way resources are accessed and operations exposed can vary from resource to resource.

The points described above are not meant to tell you which of the two is better, but rather to bring out the differences between the two types of web services that are currently most common. With that, let us move to the next section, which will introduce you to RESTEasy.

RESTEasy – What is it?

In Java, all the frameworks that provide functionality to implement REST based applications are implementations of the JAX-RS specification. RESTEasy is no exception – it is a portable implementation of JAX-RS. There are two versions of this specification – 1.1 and 2.0. All RESTEasy versions prior to 3.x implemented JAX-RS 1.1; from 3.0 onwards, RESTEasy implements JAX-RS 2.0.

The main features provided by RESTEasy are:

Portability:

It can be used in any application server that runs on JDK 6 or higher. For example, a RESTEasy based application/web service built for JBoss AS can be deployed in GlassFish.

Client framework:

RESTEasy has a client framework that leverages JAX-RS annotations so that a developer can write HTTP clients easily. One thing to keep in mind is that JAX-RS itself defines annotations for the server side only.
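
For example, with the standard JAX-RS 2.0 client API that RESTEasy 3.x implements, fetching a representation could look like the following sketch; the URI is an assumption borrowed from the earlier library example:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

Client client = ClientBuilder.newClient();
String xml = client.target("http://library.contessa.com/rest/1695") // assumed URI
                   .request("application/xml")                      // ask for the XML representation
                   .get(String.class);                              // execute GET, read the body as a String
client.close();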

Client cache:

It supports caching semantics, including cache revalidation. This 'client cache' is a browser-like cache that can be used by applications making use of the RESTEasy client framework.

Server cache:

RESTEasy provides a server side, in-memory cache that stores the generated responses. It is a local response cache, since it sits in front of the REST service. Thanks to this, RESTEasy can automatically handle ETag generation and cache revalidation.

Providers for common media types:

The most common media types used for data transfer in REST services are XML, JSON, YAML, multipart and Atom. RESTEasy has providers that marshal to and unmarshal from these media types.
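
As a sketch of what the providers buy you, a resource method can accept and return domain objects directly and let RESTEasy do the marshalling; the Book class and the store call below are illustrative assumptions:

import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Produces;

@POST
@Consumes("application/xml")   // the XML provider unmarshals the request body into a Book
@Produces("application/xml")   // and marshals the returned Book back into XML
public Book addBook(Book book) {
    return store(book); // hypothetical persistence call
}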

Interceptor model:

Interceptors provide a way to process requests and responses before a request is passed to the business method or a response is returned to the client. RESTEasy provides interceptors that can work either on the bodies of the request and response or on the request and response themselves.
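
For instance, a JAX-RS 2.0 WriterInterceptor that touches the response body just before it is written out could look like this minimal sketch; the logging line is illustrative:

import java.io.IOException;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.ext.Provider;
import javax.ws.rs.ext.WriterInterceptor;
import javax.ws.rs.ext.WriterInterceptorContext;

@Provider
public class LoggingWriterInterceptor implements WriterInterceptor {
    @Override
    public void aroundWriteTo(WriterInterceptorContext context)
            throws IOException, WebApplicationException {
        System.out.println("Writing entity of type " + context.getType()); // inspect before writing
        context.proceed(); // hand over to the next interceptor or the actual writer
    }
}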

In the coming chapters, I will focus on how to use the above mentioned features and when to use them. Till then…

Arjun, Without a Doubt – Review


Story – the two minute version

The book starts with the POV of Arjuna in the forest where the Pandavas and Kunti are living incognito after escaping from the house of lac. Then the scene shifts to Panchal, where the kingdom is preparing for the Swayamvar of Draupadi. Here, the readers are introduced to Krishna and to Draupadi through her own words. From there on, each event in the Mahabharata is portrayed from the POV of either Arjuna or Draupadi.

What I liked

  1. The relationship of Arjuna and Krishna. In any book related to the Mahabharata, the first thing I look for is the portrayal of Parth-Madhav. The author has done full justice in this regard. I would dare say that this is the first time an Indian author has depicted their friendship as it is in the epic.

  2. Arjuna's dedication to archery. Dr. Shinde beautifully depicts the sweat and blood Arjuna shed to become the peerless archer that he was. A majority of authors forget this aspect of Arjuna.

  3. Lord Indra’s pride in the victories of Arjuna and his love for Arjuna.

  4. Karna's one-sided rivalry with Arjuna. In the epic, Arjuna had only one rival – himself. However, with the emergence of Karna as a tragic hero in Indian literature, this aspect of Arjuna is selectively forgotten. That is not the case with Arjuna, Without a Doubt. There is a long conversation between the three Krishnas that makes it clear that Arjuna neither considered Karna a rival nor was ever intimidated by him.

  5. The unique relationship between Arjuna and Draupadi. In the epic, most of their conversation happens through eyes, smiles and sarcastic banter. In this book, these translate into explicit conversations, and that makes understanding their relationship much easier.

What could have been better

  1. Arjuna's thoughts about Khandav-dahana. This is not the first time an author has shown Arjuna as traumatized by his role in the burning of the Khandava forest. However, nowhere in the epic is this mentioned.

  2. Facts about Gandiva, specifically the fact that nobody except Krishna and Arjuna could lift it. Draupadi could never have taken it to her room.

  3. The reason for Arjuna's silence during the dice hall incident. I won't spoil it for anyone who has not yet read the book. However, I feel a better reason could have been found in his inner struggle between cold logic and emotion.

  4. Portrayal of Subhadra. In the epic, the only person for whom Arjuna openly declares his love is Subhadra. That could have been taken into consideration.

  5. The darker shades in the portrayal of the other Pandavas and Kunti. I am not going into details, as that could spoil many of the twists in this tale. The portrayal could have been more balanced.

Overall Rating

4 out of 5. A must-read for any fan of Arjuna or of the Arjuna-Draupadi relationship.

SDL Programming in Linux: Getting Started with OpenGL


SDL is the foundation on which a game can be built without much ado. However, SDL is not complete in itself. It provides services through which the interaction between the various components of a game/simulation, as well as the game's interaction with the OS, becomes seamless. If there are no components to utilize these services, they remain just a proof of concept. In a gaming engine, most of the time, these services are required by the rendering and AI components. From this part onwards I will be concentrating on the rendering component and its interaction with SDL; I will cover the AI component in the future.

Though SDL supports other graphics libraries, its usage with OpenGL is the most common. The reason is that SDL and OpenGL fit like parts of a puzzle. So, most of the time, the rendering component, or rendering sub-system (I will be using this term from now on), of a gaming engine is built upon OpenGL. Hence, understanding OpenGL is a must to build a good rendering sub-system. This part and the articles coming in the near future will detail the different aspects of OpenGL, along with how SDL helps in creating a good framework for future use. In this part I will be providing the whys and wherefores of OpenGL. The first section details the whys and wherefores, the second section details the steps in creating a basic application, and in the last section I will create a framework using SDL that can be used in the future. In the same section, I will also use simple OpenGL routines to test the framework. That is the agenda for this discussion.

OpenGL- What is it:

If this question is asked, the most common answer one would get is that OpenGL is a graphics library in C. However, this is a misconception. In fact, OpenGL is a low-level graphics library specification. Just like J2EE, OpenGL is nothing but a set of platform neutral, language independent and vendor neutral APIs. These APIs are procedural in nature. In simple terms, this means a programmer does not describe the objects and their appearances; instead, he/she details the steps through which an effect or an appearance can be achieved. These steps comprise many OpenGL commands, including commands to draw graphic primitives such as points, lines and polygons in three dimensions. OpenGL also provides commands and procedures to work with lighting, textures, animations etc. One important aspect to keep in mind is that OpenGL is meant for rendering only. Hence, it does not provide any APIs for I/O management, window management etc. – that's where SDL comes into the picture. To understand how OpenGL renders, it is important to understand how it interfaces between the graphics application and the graphics card. So here we go.

The interfacing works at three levels. They are:

1. Generic Implementation

2. Hardware Implementation

3. OpenGL pipeline

The Generic Implementation provides a rendering layer that sits on top of the OS specific rendering system, the Hardware Implementation provides direct hardware interfacing, and the pipeline takes commands and hands them to the hardware after processing. Let's look at the details.

1. Generic Implementation:

The other name for the Generic Implementation is software rendering. If a system can display generated graphics, then, technically speaking, the Generic Implementation can run anywhere. The Generic Implementation occupies the place between the program and the software rasterizer.

The Generic Implementation takes the help of OS specific APIs to draw the generated graphics. For example, on Windows it is GDI, whereas on *nix systems it is Xlib. The generic implementation on Windows is known as WOGL and that on Linux is Mesa 3D.

2. Hardware Implementation:

The problem with the Generic Implementation is that it depends on the OS for rendering, and hence the rendering speed and quality differ from OS to OS. This is where the Hardware Implementation comes in. In this case, the calls to the OpenGL APIs are passed directly to the device driver (typically the AGP card's driver). The driver interfaces with the graphics device directly instead of routing everything through the OS specific graphics system.

The functioning of the Hardware Implementation is thus totally different from that of the Generic Implementation. Interfacing directly with the device driver enhances both the quality and the speed of the rendered graphics.

3. OpenGL Pipeline:

In essence, a pipeline is a process broken down into finer steps – the steps of a conversion or transformation. These steps together form the pipeline. In a graphics pipeline, each stage or step refines the scene; in the case of OpenGL, the scene data is vertex data. Whenever an application makes an API call, it is placed in the command buffer along with commands, texture and vertex data. When this buffer is flushed (either programmatically or by the driver), the contained data is passed on to the next step, where calculation intensive lighting and transformations are applied. Once this is completed, the next step creates colored images from the geometric, color and texture data. The created image is placed in the frame buffer, which is the memory of the graphics device – that is, the screen.

Though this is a simplified version of the actual process, it provides an insight into the working of OpenGL. This brings this section to a conclusion. However, one question still remains – what are the basic steps in creating an OpenGL application? That is what the next section is about.

OpenGL- Basic Steps towards Application:

Till now, the theory of OpenGL has been discussed. Now let's see how to put it into use. To draw any shape onto the screen, there are three main steps. They are:

1. Clearing the screen

2. Resetting the view

3. Drawing the scene

Of these, the third step consists of multiple sub-steps. Following are the details:

1. Clearing the Screen:

To set the stage for drawing, clearing the screen is a must. This can be done using the glClear() command. It clears the screen by setting the values of the bit plane area of the viewport. glClear() takes a single argument that is the bitwise OR of several values indicating which buffers are to be cleared. The values of the parameter can be:

a. GL_COLOR_BUFFER_BIT

It indicates that the buffers currently enabled for color writing have to be cleared.

b. GL_DEPTH_BUFFER_BIT

This is used to clear the depth buffer.

c. GL_ACCUM_BUFFER_BIT

If the accumulation buffer has to be cleared use this.

d. GL_STENCIL_BUFFER_BIT

This is passed as parameter when the stencil buffer has to be cleared.

Next, the color to be used as the erasing color is specified. This can be done using glClearColor(). It sets the color that is used whenever the color buffers are cleared, so that the screen is recreated accordingly. Note that glClearColor() only takes effect for subsequent calls to glClear(). So to set the clearing color to blue and then clear the depth buffer, the statements would be:

glClearColor(0.0f, 0.0f, 1.0f, 0.0f);

glClear(GL_DEPTH_BUFFER_BIT);

2. Resetting the View:

The background and the required buffers have been cleared. But the actual model of the image is based on the view, which can be considered the matrix representation of the image. So, before drawing, this matrix has to be reset to the identity matrix. This is done using glLoadIdentity(). The statement would be:

glLoadIdentity();

3. Drawing the Scene:

To draw the scene, we need to tell OpenGL two things:

a. Start and Stop the drawing:

These commands are issued through calls to glBegin() and glEnd(). glBegin() takes one parameter – the type of shape to be drawn. To draw using three points, use GL_TRIANGLES; GL_QUADS to use four points; and GL_POLYGON to use multiple points. glEnd() tells OpenGL to stop drawing. For example, to draw a triangle the statements would be:

glBegin(GL_TRIANGLES);
    /* drawing commands go here */
glEnd();

The drawing commands come between these two calls.

b. Issue the drawing commands:

In the drawing commands, vertex data is specified. These commands are of the form glVertex*f(), where * corresponds to the number of parameters – 2 or 3. Each call specifies a vertex, and the vertices are connected according to the shape passed to glBegin(). So to create a triangle with the coordinates (0.0, 1.0, 0.0), (-1.0, -1.0, 0.0) and (1.0, -1.0, 0.0), the commands would be:

glBegin(GL_TRIANGLES);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
glEnd();

That's all there is to drawing objects with OpenGL. In the next section, these commands will be used to put the SDL based framework to the test.

SDL Based framework- Creation & Testing:

Till now, I have discussed various APIs of SDL. Now it's time to put them together into a framework for working with OpenGL. So here we go.

First the includes:

#include <stdio.h>      // include the standard IO header
#include <stdlib.h>     // and the standard lib header
#include <string.h>     // and the string lib header
#include <stdbool.h>    // for bool when compiling as C
#include <GL/gl.h>      // we're including the OpenGL header
#include <GL/glu.h>     // and the GLU header
#include <SDL.h>        // and the SDL header

The global variables:

// screen attributes used below; the original listing assumes these are
// defined somewhere, so the values here are placeholders
#define SCREEN_W   640
#define SCREEN_H   480
#define SCREEN_BPP  16

bool isProgramLooping;   // we're using this one to know if the program
                         // must go on in the main loop
SDL_Surface *Screen;

// status flags and counters used by Initialize() below; the original
// listing assumes these are declared elsewhere
struct { bool Visible, MouseFocus, KeyboardFocus; } AppStatus;
float angle, cnt1, cnt2;

Now the common functionalities – initialization, termination, full-screen toggling.

bool Initialize(void)                  // any application & user initialization code goes here
{
    AppStatus.Visible       = true;    // at the beginning, our app is visible
    AppStatus.MouseFocus    = true;    // and has both mouse
    AppStatus.KeyboardFocus = true;    // and keyboard input focus

    // start of user initialization; these are just examples
    angle = 0.0f;                      // set the starting angle to zero
    cnt1  = 0.0f;                      // set the cos (for the X axis) counter to zero
    cnt2  = 0.0f;                      // set the sin (for the Y axis) counter to zero

    // any resource loading would go here; on failure, report it and bail out, e.g.:
    // if (/* loading a graphic failed */)
    // {
    //     printf("Cannot load graphic: %s\n", SDL_GetError());
    //     return false;
    // }

    return true;                       // return true (initialization successful)
}

void Deinitialize(void)      // any user deinitialization goes here
{
    return;                  // we have nothing to deinit now
}

void TerminateApplication(void)       // terminate the application
{
    static SDL_Event Q;               // we're sending a SDL_QUIT event

    Q.type = SDL_QUIT;                // to the SDL event queue

    if (SDL_PushEvent(&Q) == -1)      // try to send the event
    {
        printf("SDL_QUIT event can't be pushed: %s\n", SDL_GetError());
        exit(1);                      // and exit
    }

    return;                           // we're always making our functions return
}

void ToggleFullscreen(void)           // toggle fullscreen/windowed (works on Linux/BeOS only)
{
    SDL_Surface *S;                   // a surface to point at the screen

    S = SDL_GetVideoSurface();        // get the video surface

    if (!S || (SDL_WM_ToggleFullScreen(S) != 1))   // if SDL_GetVideoSurface failed, or we can't toggle to fullscreen
    {
        printf("Unable to toggle fullscreen: %s\n", SDL_GetError());   // we're reporting the error, but we're not exiting
    }

    return;                           // always return
}

Next comes the OpenGL part – creating an OpenGL window, in other words, initializing OpenGL. Since the view needs updating whenever the window is moved or resized, we first need the reshape function:

void ReshapeGL(int width, int height)    // reshape the window when it's moved or resized
{
    glViewport(0, 0, (GLsizei)(width), (GLsizei)(height));                    // reset the current viewport
    glMatrixMode(GL_PROJECTION);                                              // select the projection matrix
    glLoadIdentity();                                                         // reset the projection matrix

    gluPerspective(45.0f, (GLfloat)(width)/(GLfloat)(height), 1.0f, 100.0f);  // calculate the aspect ratio of the window
    glMatrixMode(GL_MODELVIEW);                                               // select the modelview matrix
    glLoadIdentity();                                                         // reset the modelview matrix
    return;
}

bool CreateWindowGL(int W, int H, int B, Uint32 F)    // this code creates our OpenGL window
{
    SDL_GL_SetAttribute(SDL_GL_RED_SIZE,   5);        // in order to use SDL_OPENGLBLIT we have to
    SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 5);        // set the GL attributes first
    SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE,  5);
    SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 16);
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);      // colors and double-buffering

    if (!(Screen = SDL_SetVideoMode(W, H, B, F)))     // we're using SDL_SetVideoMode to create the window
    {
        return false;                                 // if it fails, we're returning false
    }

    SDL_FillRect(Screen, NULL, SDL_MapRGBA(Screen->format, 0, 0, 0, 0));
    ReshapeGL(SCREEN_W, SCREEN_H);                    // we're calling reshape as the window is created

    return true;                                      // return true (initialization successful)
}
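
The main() function further down also calls an InitGL function that sets up the initial OpenGL state. The original listing leaves it out, so here is a minimal sketch of what it could look like; the exact state it sets (shading model, clear color, depth testing) is an assumption:

bool InitGL(SDL_Surface *S)                // a minimal, assumed GL initialization
{
    glShadeModel(GL_SMOOTH);               // enable smooth shading
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);  // black background
    glClearDepth(1.0f);                    // depth buffer setup
    glEnable(GL_DEPTH_TEST);               // enable depth testing
    glDepthFunc(GL_LEQUAL);                // the type of depth test to do
    return true;                           // initialization went OK
}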

           

I will be discussing the APIs used in the reshape function in the next issue. Next is the draw function, which also contains the test code:

void Draw3D(SDL_Surface *S)     // OpenGL drawing code goes here
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);   // clear the screen and the depth buffer;
                                                          // the clear color has been set at init
    glLoadIdentity();                                     // reset the modelview matrix

    glBegin(GL_TRIANGLES);
        glVertex3f( 0.0f,  1.0f, 0.0f);
        glVertex3f(-1.0f, -1.0f, 0.0f);
        glVertex3f( 1.0f, -1.0f, 0.0f);
    glEnd();

    glFlush();                                            // flush the GL rendering pipelines

    return;
}

Now the main(). It contains the keyboard handling code:

int main(int argc, char **argv)
{
    SDL_Event E;       // an event used in the polling process
    Uint8 *Keys;       // a pointer to an array that will contain the keyboard snapshot
    Uint32 Vflags;     // our video flags

    Screen = NULL;
    Keys   = NULL;
    Vflags = SDL_HWSURFACE | SDL_OPENGLBLIT;   // a hardware surface and the special OPENGLBLIT
                                               // mode, so we can even blit 2D graphics
                                               // in our OpenGL scene

    if (SDL_Init(SDL_INIT_VIDEO) < 0)          // init the SDL library, the video subsystem
    {
        printf("Unable to open SDL: %s\n", SDL_GetError());   // if SDL can't be initialized
        exit(1);
    }

    atexit(SDL_Quit);      // SDL has been initialized; now we're making sure that
                           // SDL_Quit will be called in case of exit()

    if (!CreateWindowGL(SCREEN_W, SCREEN_H, SCREEN_BPP, Vflags))   // video flags are set, creating the window
    {
        printf("Unable to open screen surface: %s\n", SDL_GetError());
        exit(1);
    }

    if (!InitGL(Screen))   // we're calling the OpenGL init function
    {
        printf("Can't init GL: %s\n", SDL_GetError());
        exit(1);
    }

    if (!Initialize())     // and the application init function
    {
        printf("App init failed: %s\n", SDL_GetError());
        exit(1);
    }

    isProgramLooping = true;
    while (isProgramLooping)                   // and while it's looping
    {
        if (SDL_PollEvent(&E))                 // we're fetching an event, if there is one,
        {
            switch (E.type)                    // and processing it
            {
            case SDL_QUIT:                     // is it a quit event?
                {
                    isProgramLooping = false;
                    break;
                }

            case SDL_VIDEORESIZE:              // is it a resize event?
                {
                    ReshapeGL(E.resize.w, E.resize.h);
                    break;                     // and break
                }

            case SDL_KEYDOWN:                  // has someone pressed a key?
                {
                    Keys = SDL_GetKeyState(NULL);   // take a snapshot of the keyboard
                    break;
                }
            }
        }

        if (Keys != NULL && Keys[SDLK_ESCAPE]) // keyboard handling; quitting on Escape is just an example
        {
            TerminateApplication();
        }

        Draw3D(Screen);                        // do the drawings!
        SDL_GL_SwapBuffers();                  // and swap the buffers (we're double-buffering, remember?)
    }

    Deinitialize();
    exit(0);          // and finally we're out; exit() will call SDL_Quit
    return 0;         // we're standard: the main() must return a value
}

 

That brings us to the end of this discussion. This time it was a bit lengthy. But the framework that has just been developed will serve as the foundation for developing functionality like lighting, texture mapping, animation and so on. The next topic will be using timers to animate the triangle just drawn. Till next time.

Game Programming using SDL: Working with File I/O API


File Input/Output, also generally known as file I/O, is one of the essential components of any software, and games are no exception. The file I/O can be for loading a background, a texture or a simple text indicating the level or score. It can also be used for saving the player's current statistics, level details or a custom map of the level. Whatever the scenario, without good and optimized file I/O, the game play will not be a rewarding experience for the player. With so many platforms to target, optimizing an API and making it generic enough to be used on multiple platforms is an arduous task. That is where the file I/O API of SDL comes into play. The APIs provided by SDL are not platform specific; the platform specific aspects are taken care of by SDL under the hood. Hence, the developer has to focus only on the logic of the game and not on the 'logistics' of file operations. The focus of this discussion will be the file I/O provided by SDL. The first section will be about the whys and wherefores of the API. In the second section, the steps for using the API will be detailed. The last section will have an example that makes use of the API discussed in the first two sections. That is the outline for this discussion.

SDL File I/O API – the Whys and Wherefores

The file I/O API is one of the less documented APIs of SDL. However, the features provided by the API ease many file I/O operations, such as loading an image from an archived (zip or gzip) file. The main component of the API that makes such operations easy is the structure named SDL_RWops. Since the SDL_RWops structure forms the basis of file I/O, the file operations as well as the API are also known as RWops. So, in short, the RWops API consists of the following:

1. The SDL_RWops structure

2. The functions that operate upon the structure

The former holds file handles as well as pointers to memory mapped files. The latter provides ways to read from and write to those file handles and memory mapped files. Here are the details.

1. The SDL_RWops structure:

It is akin in functionality to the FILE structure provided by the standard C library. In other words, SDL_RWops is the read/write operations structure. All the file I/O functions make use of this structure to keep track of file handles, the current position being accessed, etc. To use the API, it is not necessary to know the internals of this structure. The main point to keep in mind is that all of the RWops API needs this structure to work, so any exceptions encountered while running an application that makes use of the RWops API can be traced back to problems with the initialization of this structure. Note that it is also called the 'RWops structure'.

2. The functions that operate upon the structure

Most of the functions provided by the RWops API are similar in functionality to their counterparts in the standard library. The most commonly used functions of RWops are:

a. SDL_RWFromFile

It opens a file whose name is passed as the first argument. The second argument is the mode in which the file has to be opened. The function returns a pointer to the SDL_RWops structure corresponding to the opened file. The following statements open a file named "tux.bmp" in read mode and return a pointer to the SDL_RWops structure for "tux.bmp".

SDL_RWops *file;

file = SDL_RWFromFile("tux.bmp", "r");

b. SDL_RWFromMem

It prepares a memory area for RWops to use. In other words, it sets up the RWops structure based on a memory area of a certain size. It takes two arguments – the pointer to the memory and the size of the memory. One of the scenarios where this function comes in handy is when one wants to save the current screen as a bitmap. The following example sets up a RWops structure based on a byte array.

char bitmap[310000];

SDL_RWops *rw;

rw = SDL_RWFromMem(bitmap, sizeof(bitmap));

c. SDL_FreeRW

It frees the memory allocated to the structure. It takes a pointer to the RWops structure as its argument.

That brings us to the end of this section. The next section will be about the steps to use the API.

Using RWops API – Step-by-Step:

There are three basic steps to using the RWops API. They are:

1. Get/initialize the SDL_RWops structure

2. Perform operations on the structure

3. Free the structure

Even though the steps seem similar to those of the standard API, in the case of RWops the same structure can be used to access memory, a stream or a file handle. Here are the details.

1. Get or initialize the SDL_RWops structure

As discussed in the previous section, the SDL_RWops structure forms the basis of any file I/O operation in SDL. So, the first step is to get the RWops structure. There are four ways to get or initialize the structure. They are:

a. Using a filename

In this case, the structure is initialized directly from the file whose name has been provided. To do so, the SDL_RWFromFile function is used. The following statements initialize the structure from the "texture.bmp" file:

SDL_RWops *file;

file = SDL_RWFromFile("texture.bmp", "r");

 

In the above statements, the structure is initialized from the filename passed as the first argument. The second argument is the mode in which the structure is initialized. In this case the mode is "r", i.e. read-only. Hence, the structure can be used only to read from the "texture.bmp" file. Following are the acceptable values for the mode argument:

“r” – Open a file for reading. The file must exist.

“w” -Create an empty file for writing. If a file with the same name already exists its content is erased and the file is treated as a new empty file.

“a” – Append to a file. Writing operations append data at the end of the file. The file is created if it does not exist.

“r+” – Open a file for update both reading and writing. The file must exist.

“w+” – Create an empty file for both reading and writing. If a file with the same name already exists its content is erased and the file is treated as a new empty file.

“a+” – Open a file for reading and appending. All writing operations are performed at the end of the file, protecting the previous content to be overwritten. One can reposition (fseek, rewind) the internal pointer to anywhere in the file for reading, but writing operations will move it back to the end of file. The file is created if it does not exist.
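As a quick illustration of the write modes, here is a minimal sketch, assuming a hypothetical file name "scores.dat": it opens the file with "w" and writes a small buffer through the generic SDL_RWwrite call before closing the stream with SDL_RWclose.

SDL_RWops *out;
const char data[] = "high score: 9000";

/* "w" creates the file (or truncates an existing one) for writing;
   "scores.dat" is just an illustrative name */
out = SDL_RWFromFile("scores.dat", "w");
if (out != NULL) {
    /* Write sizeof(data) objects of 1 byte each */
    SDL_RWwrite(out, data, 1, sizeof(data));
    /* Close the stream and free the structure */
    SDL_RWclose(out);
}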

b. From file pointer using SDL_RWFromFP

In this case, a file pointer is used to initialize the RWops structure. The file pointer is opened using the file I/O of the standard library. This function is not present in the latest version of SDL because the Windows platform does not support DLLs using files opened by the application, and the SDL library is loaded as a DLL.

c. From a pointer in memory using SDL_RWFromMem

As discussed in the first section, SDL_RWFromMem allows one to create an RWops structure from memory, given a pointer to that memory. On the one hand, this comes in handy when working with file data placed in memory by another API, such as the gzip API. On the other hand, if one has to write something to a specific memory location, which can then be transferred to a file, this function is also handy. The following statements depict the second scenario, where a memory location has to be written to.

char bitmap[310000];

SDL_RWops *rw;

rw = SDL_RWFromMem(bitmap, sizeof(bitmap));

SDL_SaveBMP_RW(screen_bitmap, rw, 0);

 

where screen_bitmap is a pointer to the SDL_Surface containing the current screen data.

d. Allocating and filling it in manually using SDL_AllocRW:

Using SDL_AllocRW, one can get an empty RWops structure whose fields need to be filled in manually. The following statement creates an empty RWops structure:

SDL_RWops *c = SDL_AllocRW();

Explaining how to fill the structure is beyond the scope of this discussion.

This brings us to the second step.

2. Perform operations on the structure

Once the RWops structure is initialized, it can be used for any kind of file I/O permitted by SDL: updating the texture of a scene, saving the current screen as a bitmap, or getting the contents of a zip file and updating the screen with it. It can be used to save the current map or player statistics. The possibilities are many. For example, the following statements read a bitmap file so it can be displayed on the screen.

SDL_RWops *file;

SDL_Surface *image;

 

file = SDL_RWFromFile("myimage.bmp", "rb");

image = SDL_LoadBMP_RW(file, 1); /* 1 means the file will be automatically closed */

 

3. Free the RWops structure

The last step is to free the structure once its usage is complete. This step is mandatory if the structure was created with SDL_AllocRW. To free the structure, pass the pointer to the RWops structure to the SDL_FreeRW function. The following statements free an RWops structure named rw.

SDL_RWops *rw = SDL_AllocRW();

if (rw) {
    SDL_FreeRW(rw);
}

That completes the section on the steps to use RWops API.

3. RWops API – In the real world

In the real world, the API is rarely used standalone. Most of the time it is used in conjunction with some other API, such as zlib, to read archived files (zip, gzip, etc.). The example I am about to discuss makes use of the zlib API to read an archived file. The example will be developed as a function that will

a. Accept the name/full path of the archive

b. Return the RWops structure corresponding to the archive

Let us start with the header file to be included.

#include "SDL.h"

#include <stdio.h>

#include <stdlib.h>

#include <zlib.h>

The zlib.h header is required for the zlib API (stdlib.h is needed for the buffer allocation below). Next is the function. It takes the archive name and the size of the memory to be allocated for the file contents as arguments, and returns the RWops structure.

SDL_RWops* GetFromArchive( char *archive, int bufferSize)

{

}

The next step is to declare variables for the RWops structure and the gzFile. gzFile is the zlib equivalent of the standard I/O FILE structure. The function also allocates a buffer of the size specified by the bufferSize argument; it is heap-allocated so that the data outlives the function call.

SDL_RWops* GetFromArchive(char *archive, int bufferSize)
{
    /* gzFile is the zlib equivalent of FILE from stdio */
    gzFile file;

    /* This is the RWops structure we'll be using */
    SDL_RWops *rw;

    /* Heap-allocated so the data outlives this function; the RWops
       returned below will point into this buffer */
    Uint8 *buffer = malloc(bufferSize);

    /* We'll need to store the actual size of the file when it comes in */
    int filesize;
}

Next, open the archive with gzopen, fill the buffer with the decompressed contents using gzread, and create the RWops structure from the buffer. The function then returns the created RWops.

SDL_RWops* GetFromArchive(char *archive, int bufferSize)
{
    /* gzFile is the zlib equivalent of FILE from stdio */
    gzFile file;

    /* This is the RWops structure we'll be using */
    SDL_RWops *rw;

    /* Heap-allocated so the data outlives this function; the caller
       must free it once the RWops is no longer needed */
    Uint8 *buffer = malloc(bufferSize);

    /* We'll need to store the actual size of the file when it comes in */
    int filesize;

    if (buffer == NULL)
        return NULL;

    /* Open the archive for reading */
    file = gzopen(archive, "rb");
    if (file == NULL) {
        free(buffer);
        return NULL;
    }

    /* Decompress up to bufferSize bytes into the buffer */
    filesize = gzread(file, buffer, bufferSize);
    gzclose(file);

    /* Create RWops from memory - SDL_RWFromMem needs to know where
       the data is and how big it is (that is why the file size was saved) */
    rw = SDL_RWFromMem(buffer, filesize);

    return rw;
}

That completes the example. The example assumes knowledge of the zlib API. Though RWops provides a way to read from and write to files, neither RWops nor SDL itself provides an easy way to manipulate the loaded images. That is where the SDL Image library comes into the picture. Working with the SDL Image API will be the focus of the next discussion. Till then…

Packt is celebrating the publication of its 1000th title


PRESS RELEASE
28th September 2012
Packt Publishing reaches 1000 IT titles and celebrates with an open invitation

Birmingham-based IT publisher Packt Publishing is about to publish its 1000th title. Packt books are renowned among developers for being uniquely practical and focused, but you’d be forgiven for not yet being in the know – Packt books cover highly specific tools and technologies which you might not expect to see a high quality book on.

Packt is certain that in its 1000 titles there is at least one book that everyone in IT will find useful right away, and is inviting anyone to choose and download any one of its eBooks for free over its celebration weekend of 28-30th Sep 2012. Packt is also opening its online library for a week for free to give customers an easy way to research their choice of free eBook.

Packt supports many of the Open Source projects covered by its books through a project royalty donation, which has contributed over $400,000 to Open Source projects up to now. As part of the celebration Packt is allocating $30,000 to share between projects and authors as part of the weekend giveaway, allocated based on the number of copies of each title downloaded.

Dave Maclean, founder of Packt Publishing:

“At Packt we set out 8 years ago to bring practical, up to date and easy to use technical books to the specialist tools and technologies that had been largely overlooked by IT publishers. Today, I am really proud that with our authors and partners we have been able to make useful books available on over 1000 topics and make our contribution to the development community.”

More details can be found at

http://bit.ly/RXnAMc

SDL Programming in Linux – SDL & OpenGL: Brothers in Gaming


In the world of gaming, SDL provides the entire necessary infrastructure, as should be clear from the earlier parts of this series. But infrastructure is to a game what a skeleton is to a human body: without muscles, no locomotion is possible. Continuing the analogy, SDL provides the skeletal structure of the game, whereas the flesh, blood and skin are provided by 2D and 3D graphics libraries. In the current plethora of 3D libraries, OpenGL stands out on various accounts, the most significant being its compatibility with almost all platforms and graphics cards. This is reflected in the architecture of SDL, as SDL can create and use OpenGL contexts on several platforms. Such an architecture helps the game programmer use all the sub-systems of SDL seamlessly in conjunction with OpenGL to create effective games and gaming environments. In this article I will discuss how to use SDL and OpenGL together, where the gaming infrastructure is provided by SDL and the animation as well as rendering is handled by OpenGL. The first section discusses the steps required to integrate OpenGL with SDL; the second section uses the pointers provided in the first section to create an application with basic animation using OpenGL. That is the agenda for the current discussion.

Initialization – Bringing OpenGL into the Picture:

 

In SDL, all the sub-systems are initialized via SDL_Init(). OpenGL, being a part of the graphics subsystem, is not directly initialized in this manner. The steps for initializing OpenGL are:

1. Set OpenGL Attributes

2. Specify use of Double Buffering

3. Set the Video Mode

Of these, the second is optional, as it is needed only when double buffering is a requirement. Let's have a detailed look at each of them.

1. Setting the OpenGL Attributes:

Before initializing the video, it is better to set up the OpenGL attributes. These are passed to OpenGL via SDL by calling the SDL_GL_SetAttribute() function, whose parameters are the OpenGL attribute and its value. The most common attributes passed to this function are:

i. SDL_GL_RED_SIZE:

It sets the size of the red component of the frame buffer, in bits. The commonly used value is 5. Similar attributes exist for the blue and green components: SDL_GL_BLUE_SIZE and SDL_GL_GREEN_SIZE respectively. To set the green component to a bit value of 4, the code would be:

SDL_GL_SetAttribute( SDL_GL_GREEN_SIZE, 4 );

ii. SDL_GL_BUFFER_SIZE:

This attribute sets the size of the frame buffer, in bits. It must be greater than or equal to the combined value, i.e. the sum of the red, green, blue and alpha components. If the requirement is 24-bit color depth plus an 8-bit alpha channel, giving a 32-bit frame buffer, then each color component must be given a size of 8 and the frame buffer a size of 32. In code it would be:

SDL_GL_SetAttribute( SDL_GL_RED_SIZE, 8 );

SDL_GL_SetAttribute( SDL_GL_BLUE_SIZE, 8 );

SDL_GL_SetAttribute( SDL_GL_GREEN_SIZE,8 );

SDL_GL_SetAttribute(SDL_GL_BUFFER_SIZE, 32 );

iii. SDL_GL_DEPTH_SIZE:

This attribute controls the size of the depth buffer (also called the Z buffer). Graphics cards normally provide a 16-bit or 24-bit depth buffer. If the value requested is more than what is available, the operation will fail. For example:

SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE,16);

iv. SDL_GL_ACCUM_RED_SIZE:

This attribute is used to set the size of the red component of the accumulation buffer, specified in bits. SDL_GL_ACCUM_BLUE_SIZE and SDL_GL_ACCUM_GREEN_SIZE control the sizes of the blue and green components of the accumulation buffer. In code it would be:

SDL_GL_SetAttribute(SDL_GL_ACCUM_RED_SIZE,5);

A question that may arise in your mind is whether these attributes can be set after initializing the video mode. The answer is no: the settings must be in place before the video mode is invoked and configured. Next, we set up double buffering and the video mode.

2. Setting up double buffering:

This aspect also has to be covered before setting up the video mode, as this attribute too goes through SDL_GL_SetAttribute. The attribute is SDL_GL_DOUBLEBUFFER and the value is either 1 or 0. The point to keep in mind is that, when working in conjunction with OpenGL, the flag specifying double buffering must be passed as an attribute to the SDL_GL_SetAttribute function and not to SDL_SetVideoMode(). In code this would be:

SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER,1);

The above statement enables double buffering.
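Whether the request was honored can be checked once the video mode has been set (the next step), using the companion call SDL_GL_GetAttribute. A small hedged sketch:

int db = 0;

/* After SDL_SetVideoMode, query whether double buffering was granted */
if (SDL_GL_GetAttribute(SDL_GL_DOUBLEBUFFER, &db) == 0) {
    printf("double buffering: %d\n", db);
}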

3. Setting the Video Mode:

Once the OpenGL attributes are set, setting the video mode is similar to the procedure described in the previous tutorials; for more details, have a look at Part II of the tutorial. The only difference is in the flags passed to SDL_SetVideoMode(). Apart from the other required flags, SDL_OPENGL must also be set, i.e.:

int flags = 0;

flags = SDL_OPENGL | SDL_FULLSCREEN;

Setting the SDL_OPENGL flag is a must.
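To show where these flags end up, here is a minimal hedged sketch; the 640x480 resolution and the error message are illustrative, not from the example program below:

int flags = SDL_OPENGL | SDL_FULLSCREEN;

/* 0 for bits-per-pixel lets SDL pick the current display depth */
SDL_Surface *screen = SDL_SetVideoMode(640, 480, 0, flags);

if (screen == NULL) {
    fprintf(stderr, "Unable to set video mode: %s\n", SDL_GetError());
}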

That's all there is to the required steps. Now let OpenGL play.

OpenGL in Action:

The theory is over; it's now time to see some real action. The example application will render a rotating triangle. The includes now contain one more header file.

#include <SDL/SDL.h>

#include <GL/gl.h>

 

gl.h contains the function declarations necessary to work with OpenGL.

Next come main() and the OpenGL attributes.

int main(int argc, char *argv[])

{

  SDL_Event event;

  float theta = 0.0f;

  /* The video subsystem must be initialized before
     SDL_GL_SetAttribute can be used, so SDL_Init comes first */
  SDL_Init(SDL_INIT_VIDEO);

  SDL_GL_SetAttribute( SDL_GL_RED_SIZE, 8 );

  SDL_GL_SetAttribute( SDL_GL_BLUE_SIZE, 8 );

  SDL_GL_SetAttribute( SDL_GL_GREEN_SIZE, 8 );

  SDL_GL_SetAttribute( SDL_GL_BUFFER_SIZE, 32 );

 

:

:

}

Next, set the video mode

 

int main(int argc, char *argv[])

{

SDL_Event event;

float theta = 0.0f;

SDL_Init(SDL_INIT_VIDEO);

SDL_GL_SetAttribute( SDL_GL_RED_SIZE, 8 );

SDL_GL_SetAttribute( SDL_GL_BLUE_SIZE, 8 );

SDL_GL_SetAttribute( SDL_GL_GREEN_SIZE, 8 );

SDL_GL_SetAttribute( SDL_GL_BUFFER_SIZE, 32 );

SDL_SetVideoMode(600, 300, 0, SDL_OPENGL | SDL_HWSURFACE |
                              SDL_NOFRAME);

:

:

}

 

The video is initialized to a 600×300 resolution, and hardware rendering is used via the SDL_HWSURFACE flag, so OpenGL writes to the graphics card's memory instead of to a software surface in system memory. Note that SDL_Init() is called before the SDL_GL_SetAttribute() calls, since the GL attributes can only be set once the video subsystem has been initialized. After this step, the territory of OpenGL starts.

 

int main(int argc, char *argv[])

{

SDL_Event event;

float theta = 0.0f;

SDL_Init(SDL_INIT_VIDEO);

SDL_GL_SetAttribute( SDL_GL_RED_SIZE, 8 );

SDL_GL_SetAttribute( SDL_GL_BLUE_SIZE, 8 );

SDL_GL_SetAttribute( SDL_GL_GREEN_SIZE, 8 );

SDL_GL_SetAttribute( SDL_GL_BUFFER_SIZE, 32 );

SDL_SetVideoMode(600, 300, 0, SDL_OPENGL | SDL_HWSURFACE |
                              SDL_NOFRAME);

glViewport(0, 0, 600, 300);

  glClearColor(0.0f, 0.0f, 0.0f, 0.0f);

  glClearDepth(1.0);

  glDepthFunc(GL_LESS);

  glEnable(GL_DEPTH_TEST);

  glShadeModel(GL_SMOOTH);

  glMatrixMode(GL_PROJECTION);

  glMatrixMode(GL_MODELVIEW);

:

:

}

To start working with OpenGL, the viewport is initialized. Then the screen is cleared, i.e. rendered with the specified background color. Since the triangle will be rotating in 3D space, the depth has to be set and depth testing enabled. If smooth shading were not used, the color transitions would appear banded, so the smooth shading model is used. This completes setting up the OpenGL parameters after the SDL video initialization. Drawing and rotation are taken care of by the newly added code in the following listing:

int main(int argc, char *argv[])

{

SDL_Event event;

float theta = 0.0f;

SDL_Init(SDL_INIT_VIDEO);

SDL_GL_SetAttribute( SDL_GL_RED_SIZE, 8 );

SDL_GL_SetAttribute( SDL_GL_BLUE_SIZE, 8 );

SDL_GL_SetAttribute( SDL_GL_GREEN_SIZE, 8 );

SDL_GL_SetAttribute( SDL_GL_BUFFER_SIZE, 32 );

SDL_SetVideoMode(600, 300, 0, SDL_OPENGL | SDL_HWSURFACE |
                              SDL_NOFRAME);

glViewport(0, 0, 600, 300);

glClearColor(0.0f, 0.0f, 0.0f, 0.0f);

glClearDepth(1.0);

glDepthFunc(GL_LESS);

glEnable(GL_DEPTH_TEST);

glShadeModel(GL_SMOOTH);

glMatrixMode(GL_PROJECTION);

glMatrixMode(GL_MODELVIEW);

int done;

  for(done = 0; !done;)

 {

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

 

    glLoadIdentity();

    glTranslatef(0.0f,0.0f,0.0f);

    glRotatef(theta, 0.0f, 0.0f, 1.0f);

   :

:

}

}

The modelview matrix is reset with glLoadIdentity(), the drawing position is translated to the origin, and glRotatef() is given the theta value (in degrees) through which the triangle has to be rotated about the z-axis. At 0.5 degrees per frame, a full revolution takes 720 frames.
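One thing worth noting in the listing: glMatrixMode(GL_PROJECTION) is followed immediately by glMatrixMode(GL_MODELVIEW) without loading a projection matrix, so the default identity projection stays in effect. A minimal hedged sketch of what a typical setup would place between those two calls (the clipping values here are illustrative, not from the original program):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
/* An explicit orthographic projection; the volume chosen here
   matches the default clip volume and is purely illustrative */
glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();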

int main(int argc, char *argv[])

{

SDL_Event event;

float theta = 0.0f;

SDL_Init(SDL_INIT_VIDEO);

SDL_GL_SetAttribute( SDL_GL_RED_SIZE, 8 );

SDL_GL_SetAttribute( SDL_GL_BLUE_SIZE, 8 );

SDL_GL_SetAttribute( SDL_GL_GREEN_SIZE, 8 );

SDL_GL_SetAttribute( SDL_GL_BUFFER_SIZE, 32 );

SDL_SetVideoMode(600, 300, 0, SDL_OPENGL | SDL_HWSURFACE |
                              SDL_NOFRAME);

glViewport(0, 0, 600, 300);

glClearColor(0.0f, 0.0f, 0.0f, 0.0f);

glClearDepth(1.0);

glDepthFunc(GL_LESS);

glEnable(GL_DEPTH_TEST);

glShadeModel(GL_SMOOTH);

glMatrixMode(GL_PROJECTION);

glMatrixMode(GL_MODELVIEW);

int done;

for(done = 0; !done;)

{

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glLoadIdentity();

glTranslatef(0.0f,0.0f,0.0f);

glRotatef(theta, 0.0f, 0.0f, 1.0f);

glBegin(GL_TRIANGLES);

    glColor3f(1.0f, 0.0f, 0.0f);

    glVertex2f(0.0f, 1.0f);

    glColor3f(0.0f, 1.0f, 0.0f);

    glVertex2f(0.87f, -0.5f);

    glColor3f(0.0f, 0.0f, 1.0f);

    glVertex2f(-0.87f, -0.5f);

    glEnd();

 

    theta += .5f;

    SDL_GL_SwapBuffers();

   :

:

}

}

The triangle is drawn by specifying its vertices, with a color per vertex. Then the theta value is incremented and SDL_GL_SwapBuffers() displays the freshly drawn frame. Next comes the event-handling part.

int main(int argc, char *argv[])

{

SDL_Event event;

float theta = 0.0f;

SDL_Init(SDL_INIT_VIDEO);

SDL_GL_SetAttribute( SDL_GL_RED_SIZE, 8 );

SDL_GL_SetAttribute( SDL_GL_BLUE_SIZE, 8 );

SDL_GL_SetAttribute( SDL_GL_GREEN_SIZE, 8 );

SDL_GL_SetAttribute( SDL_GL_BUFFER_SIZE, 32 );

SDL_SetVideoMode(600, 300, 0, SDL_OPENGL | SDL_HWSURFACE |
                              SDL_NOFRAME);

glViewport(0, 0, 600, 300);

glClearColor(0.0f, 0.0f, 0.0f, 0.0f);

glClearDepth(1.0);

glDepthFunc(GL_LESS);

glEnable(GL_DEPTH_TEST);

glShadeModel(GL_SMOOTH);

glMatrixMode(GL_PROJECTION);

glMatrixMode(GL_MODELVIEW);

int done;

for(done = 0; !done;)

{

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glLoadIdentity();

glTranslatef(0.0f,0.0f,0.0f);

glRotatef(theta, 0.0f, 0.0f, 1.0f);

 glBegin(GL_TRIANGLES);

glColor3f(1.0f, 0.0f, 0.0f);

glVertex2f(0.0f, 1.0f);

glColor3f(0.0f, 1.0f, 0.0f);

glVertex2f(0.87f, -0.5f);

glColor3f(0.0f, 0.0f, 1.0f);

glVertex2f(-0.87f, -0.5f);

glEnd();

theta += .5f;

SDL_GL_SwapBuffers();

while (SDL_PollEvent(&event)) {

    if (event.type == SDL_KEYDOWN && event.key.keysym.sym == SDLK_ESCAPE)

        done = 1;

}

}

}

That's it. This is how SDL and OpenGL work together. The only piece missing in this puzzle is sound, which will be tackled in the next part, incidentally the last part of this series. So till next time.

SDL Programming in Linux – Events And Raw Graphics


Graphics and user-input handling: the combination that creates the symphony called a game. A game where these two are out of phase ends in cacophony. In the last article I discussed the various parameters that go into creating a screen and loading bitmapped images onto it. That was pretty high-level, as the work was done on structures that represent the actual screen and maps or sprites. But there are times when one has to get one's hands dirty by working directly on pixels. The creators of SDL anticipated this requirement and built the capability to work at the raw graphics level into the core itself, so the developer is relieved from the system-, platform- and architecture-specific nitty-gritty of manipulating pixels. The other aspect of gaming that gives developers sleepless nights is handling user input, as the handling of input devices changes from system to system. To remove this burden, SDL provides an object-oriented approach to handling events. In this article I will be discussing these two aspects of SDL: the first section focuses on the pixel-manipulation functions and their usage, and the second section focuses on input handling. Now that the agenda for this article has been laid down, let's get started.

Raw Graphics – Writing Directly onto the Display:

Though the SDL graphics APIs provide pretty high-level functionality, abstracting away all the low-level details, there are times when that abstraction is not wanted. For this purpose, too, there are ways. These do not exist as library functions but as separate functions that have to be embedded into your program. The functions are freely available, but for completeness I am including them here. These functions are:

  1. getpixel():

This function is useful when a pixel value has to be obtained from given coordinates, represented by x and y values, on the display. It works on a single pixel at a time. The first parameter is the surface from which the value has to be obtained, represented by a pointer to the SDL_Surface. The next two integer parameters represent the x and y coordinates of the pixel. The return value is a Uint32 representing the value of the pixel. Following is the code:

/*
 * Return the pixel value at (x, y)
 * NOTE: The surface must be locked before calling this!
 */
Uint32 getpixel(SDL_Surface *surface, int x, int y)
{
    int bpp = surface->format->BytesPerPixel;
    /* Here p is the address to the pixel we want to retrieve */
    Uint8 *p = (Uint8 *)surface->pixels + y * surface->pitch + x * bpp;

    switch(bpp) {
    case 1:
        return *p;

    case 2:
        return *(Uint16 *)p;

    case 3:
        if(SDL_BYTEORDER == SDL_BIG_ENDIAN)
            return p[0] << 16 | p[1] << 8 | p[2];
        else
            return p[0] | p[1] << 8 | p[2] << 16;

    case 4:
        return *(Uint32 *)p;

    default:
        return 0;   /* shouldn't happen, but avoids warnings */
    }
}

The first thing to be done is to obtain the depth, represented by BytesPerPixel. That is done by the first statement:

int bpp = surface->format->BytesPerPixel;

To get the address of the pixel, the pitch of the passed surface is multiplied by the y coordinate, the depth is multiplied by the x coordinate, and the results are added to the start of the surface's pixel data, represented by the pixels member of SDL_Surface. This calculation yields the actual address of the pixel; the SDL_Surface can be thought of as a two-dimensional array accessed in row-major order. That is done in the second statement:

Uint8 *p = (Uint8 *)surface->pixels + y * surface->pitch + x * bpp;

As the value returned by BytesPerPixel ranges from 1 to 4, according to the number of bytes needed to represent a pixel, it is used to return the value in the corresponding format, i.e. 8, 16, 24 or 32 bits. This is achieved by the switch-case block. That's all about the getpixel function.

  2. putpixel():

This is similar to getpixel(). Apart from the parameters accepted by getpixel(), it accepts one extra parameter: the pixel value to be written. Following is the code for putpixel():

/*
 * Set the pixel at (x, y) to the given value
 * NOTE: The surface must be locked before calling this!
 */
void putpixel(SDL_Surface *surface, int x, int y, Uint32 pixel)
{
    int bpp = surface->format->BytesPerPixel;
    /* Here p is the address to the pixel we want to set */
    Uint8 *p = (Uint8 *)surface->pixels + y * surface->pitch + x * bpp;

    switch(bpp) {
    case 1:
        *p = pixel;
        break;

    case 2:
        *(Uint16 *)p = pixel;
        break;

    case 3:
        if(SDL_BYTEORDER == SDL_BIG_ENDIAN) {
            p[0] = (pixel >> 16) & 0xff;
            p[1] = (pixel >> 8) & 0xff;
            p[2] = pixel & 0xff;
        } else {
            p[0] = pixel & 0xff;
            p[1] = (pixel >> 8) & 0xff;
            p[2] = (pixel >> 16) & 0xff;
        }
        break;

    case 4:
        *(Uint32 *)p = pixel;
        break;
    }
}

The working of putpixel() is almost the opposite of getpixel(): the former places a pixel value at the given coordinates, whereas the latter returns the pixel value found there. First, the BytesPerPixel of the passed SDL_Surface is extracted, just as before. Then the pixel's address is calculated, and the value is written in the format dictated by BytesPerPixel. Since the calculated value is the address of the pixel, the passed pixel value can be assigned to it directly and the display will receive the new value.

Now that both functions have been explained, let's see how to put one of them, putpixel(), to use. For this I am defining a function called putyellowpixel() that places a yellow pixel at the center of the screen. It accepts no parameters and returns no value.

void putyellowpixel()
{
    int x, y;
    Uint32 yellow;

    /* Map the color yellow to this display (R=0xff, G=0xff, B=0x00)
       Note: If the display is palettized, you must set the palette first. */
    yellow = SDL_MapRGB(screen->format, 0xff, 0xff, 0x00);

    x = screen->w / 2;
    y = screen->h / 2;

    /* Lock the screen for direct access to the pixels */
    if ( SDL_MUSTLOCK(screen) ) {
        if ( SDL_LockSurface(screen) < 0 ) {
            fprintf(stderr, "Can't lock screen: %s\n", SDL_GetError());
            return;
        }
    }

    putpixel(screen, x, y, yellow);

    if ( SDL_MUSTLOCK(screen) ) {
        SDL_UnlockSurface(screen);
    }

    /* Update just the part of the display that we've changed */
    SDL_UpdateRect(screen, x, y, 1, 1);

    return;
}

To get the yellow color, SDL_MapRGB() has to be used. The first parameter is the SDL_PixelFormat, which stores the surface format information; the next three parameters correspond to the red, green and blue components of the color. The return value is the actual pixel value corresponding to the passed color components:

yellow = SDL_MapRGB(screen->format, 0xff, 0xff, 0x00);

Once the color has been retrieved, the next step is to get the required x and y coordinates, which is achieved by the following statements:

x = screen->w / 2;
y = screen->h / 2;

Then the screen surface is locked. If this is not done, the SDL_Surface structure could get corrupted, causing instability in the game, since putpixel works on the address of the pixel directly. The check and the lock are done by:

if ( SDL_MUSTLOCK(screen) ) {
    SDL_LockSurface(screen);
}

The next step is to call putpixel. Once putpixel has returned, unlock the surface and update it. That completes placing a pixel directly onto the surface. The next section focuses on event handling with reference to the keyboard.

Handling the Keyboard – the SDL Way:

Whatever has been discussed till now covers only one aspect of interactivity. The application still cannot handle user gestures provided through input devices such as the keyboard or joystick. So now the focus shifts to input handling. Two of the most common input devices are the mouse and the keyboard, and SDL has wrappers for each of these. In this section I will be discussing keyboard handling. Before entering the world of keyboard events, it is better to understand the most recurring structures in keyboard-handling jargon. They are:

  1. SDLKey:

It is an enumerated type that represents the various keys. For example, SDLK_a represents lowercase ‘a’, SDLK_DELETE is for the ‘delete’ key, and so on.

  2. SDLMod:

The SDLKey enumeration represents only keys. To represent key modifiers such as Shift and Ctrl, the SDLMod enumeration is provided by SDL. KMOD_CAPS is one of the enumerated values; it can be used to find out whether the Caps Lock key is active. The other modifiers also have representations in SDLMod.
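As a small illustration of SDLMod, a hedged sketch: SDL_GetModState() returns the current modifier state as an SDLMod bitmask, which can be tested against constants such as KMOD_SHIFT.

SDLMod mod = SDL_GetModState();

/* KMOD_SHIFT covers both the left and right Shift keys */
if (mod & KMOD_SHIFT) {
    printf("Shift is currently held down\n");
}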

  3. SDL_keysym:

It is a structure that contains the information of a key press. The members of this structure include the scan code in a hardware-dependent format, the SDLKey value of the pressed key in the sym field, the value of the modifier keys in the mod field, and the Unicode representation of the key in the unicode field.
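To see SDL_keysym in action, here is a minimal sketch, assuming an event loop like the one shown below, that prints the human-readable name of a pressed key via SDL_GetKeyName:

/* Inside an event loop, after SDL_PollEvent(&event) has returned 1: */
if (event.type == SDL_KEYDOWN) {
    printf("Pressed: %s\n", SDL_GetKeyName(event.key.keysym.sym));
}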

  4. SDL_KeyboardEvent:

From the name itself it is obvious that this structure describes a keyboard event. The first member, type, tells whether the event is a key press or a key release. The second member gives the same information as the first but uses different values. The last member is a structure itself: the SDL_keysym structure.

Now that the structures have been brought into the picture, the next step is to use them to handle keyboard events. The logic is simple: SDL_PollEvent is used to read the events and is placed within a while loop. The value of the type member of the SDL_Event variable, passed as the parameter to SDL_PollEvent, is then checked to find the type of event, and the event processing can be done. In code it is thus:

SDL_Event event;
:
:
/* Poll for events. SDL_PollEvent() returns 0 when there are no  */
/* more events on the event queue; our while loop will exit when */
/* that occurs.                                                  */
while( SDL_PollEvent( &event ) ){
    /* We are only worried about SDL_KEYDOWN and SDL_KEYUP events */
    switch( event.type ){
    case SDL_KEYDOWN:
        printf( "Key press detected\n" );
        break;

    case SDL_KEYUP:
        printf( "Key release detected\n" );
        break;

    default:
        break;
    }
}
:
:

If this is used in the program developed in the last article, the exit condition of the program can be controlled. The new version exits only when the window's close button is pressed.

#include <SDL/SDL.h>
#include <stdio.h>
#include <stdlib.h>

SDL_Surface *screen;   /* the global display surface */

void display_bmp(char *file_name)
{
    SDL_Surface *image;

    /* Load the BMP file into a surface */
    image = SDL_LoadBMP(file_name);
    if (image == NULL) {
        fprintf(stderr, "Couldn't load %s: %s\n", file_name, SDL_GetError());
        return;
    }

    /*
     * Palettized screen modes will have a default palette (a standard
     * 8*8*4 colour cube), but if the image is palettized as well we can
     * use that palette for a nicer colour matching
     */
    if (image->format->palette && screen->format->palette) {
        SDL_SetColors(screen, image->format->palette->colors, 0,
                      image->format->palette->ncolors);
    }

    /* Blit onto the screen surface */
    if (SDL_BlitSurface(image, NULL, screen, NULL) < 0)
        fprintf(stderr, "BlitSurface error: %s\n", SDL_GetError());

    SDL_UpdateRect(screen, 0, 0, image->w, image->h);

    /* Free the allocated BMP surface */
    SDL_FreeSurface(image);
}

int main(int argc, char *argv[])
{
    /* Variable to hold the file name of the image to be loaded.
       In the real world, error-handling code would precede this. */
    char *filename = "Tux.bmp";
    int done;

    /* The following code initializes the video subsystem */
    int i_error = SDL_Init(SDL_INIT_VIDEO);

    /* If initialization is unsuccessful, then quit */
    if (i_error == -1)
        exit(1);

    atexit(SDL_Quit);

    /*
     * Initialize the display in a 640x480 8-bit palettized mode,
     * requesting a software surface
     */
    screen = SDL_SetVideoMode(640, 480, 8, SDL_SWSURFACE);
    if (screen == NULL) {
        fprintf(stderr, "Couldn't set 640x480x8 video mode: %s\n",
                SDL_GetError());
        exit(1);
    }

    /* Handle the keyboard events here.
       Catch the SDL_QUIT event to exit. */
    done = 0;
    while (!done) {
        SDL_Event event;

        /* Check for events */
        while (SDL_PollEvent(&event)) {
            switch (event.type) {
            case SDL_KEYDOWN:
                break;

            case SDL_QUIT:
                done = 1;
                break;

            default:
                break;
            }
        }

        /* Now call the function to load the image and copy it
           to the screen surface */
        display_bmp(filename);
    }
}

If you run the above code, the window won't close until the close button is pressed. Though this code does not do much in the area of interactivity, it is a beginning. As you can see, it is really easy to handle keyboard events using SDL; it removes the developer's dependence on the operating system for event handling. Working at the raw graphics level is not that difficult either.

This brings us to the end of the third part of SDL programming. The next part will cover using OpenGL with SDL, along with using timers. Till next time.

Sockets in Python: Into the world of Python Network Programming


“Code less... achieve more” is the prime philosophy behind the development of all Very High Level Languages (VHLLs, for short). But fewer lines of code should not mean reduced flexibility in choosing an approach to solving a problem. Though many VHLLs, or scripting languages as they are popularly known, do not keep flexibility in mind, there are a few that have flexibility and choice at their core. Python is one of them. This fact is evident if one tries to do network programming in Python: the choices are aplenty for the programmer, ranging from low-level or raw sockets to a completely extensible and functional web server. In this tutorial I will be discussing how to use raw sockets to create network-oriented applications in Python. The first section covers the basics of the socket module, and by the end of the section a simple echo server will be coded. In the second section the echo server is enhanced to serve multiple clients, using the concepts introduced in the first section.

 

Sockets and Ports – Doing it the Python Way:

Sockets and ports form the core of any network oriented application. According to the formal definition a socket is “An endpoint of communication to which a name may be bound”. The concept (as well as implementation) comes from the BSD community. The 4.3BSD implementation defines three domains for the sockets:

  1. Unix Domain/ File-system Domain:

The sockets under this domain are used when two or more processes within a single system have to communicate with each other. In this domain, the sockets are created within the file system and are represented as strings containing a local path, such as /var/lock/sock or /tmp/sock.

 

  2. Internet Domain:

This domain represents processes that communicate over TCP/IP. The sockets created for this domain are represented using a (host, port) tuple, where host is a fully qualified Internet host name represented as a string or in dotted-decimal format (an IP address).

  3. NS Domain:

This domain is used by processes communicating over the Xerox NS protocol, which is now obsolete.

 

Of these, only the first two are commonly used. Python supports all of them; my discussion will be limited to the Internet domain. To create an application that uses TCP/IP sockets, the steps are:

 

  1. Creating a socket
  2. Connecting the socket
  3. Binding the socket to an address
  4. Listening and accepting connections
  5. Transferring data/receiving data.

 

But before creating a socket, the socket library has to be imported. The socket module contains all that is needed to work with sockets. The import can be done in two ways:

import socket or from socket import *. If the first form is used, the methods of the socket module have to be accessed as socket.methodname(). If the latter form is used, the methods can be called without the fully qualified name. I will be using the second form, for clarity of the code and ease. Now let's see the various provisions within the socket module for programmers.

 

  1. Creating a socket:

A socket can be created by making a call to the socket() function. The socket() function returns a socket in the domain specified. The parameters to the function are:

 

a. family:

The family parameter specifies in which domain the socket has to be created. The valid values are AF_UNIX for the Unix domain and AF_INET for the Internet domain.

b. type:

type defines the type of protocol to be used. The type can be connection-oriented, like TCP, or connectionless, like UDP. These are selected by the constants SOCK_STREAM for TCP and SOCK_DGRAM for UDP. Other valid values are SOCK_RAW, SOCK_SEQPACKET and SOCK_RDM.

c. protocol:

This is generally left at its default value, which is 0.

 

So a socket for the Internet domain is created thus:

testsocket = socket(AF_INET, SOCK_STREAM)
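For contrast, a connectionless (UDP) socket in the same domain would be created like this; a one-line sketch, not used by the echo server below:

udpsocket = socket(AF_INET, SOCK_DGRAM)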

 

  2. Connecting the Socket:

Sockets thus created can be used on the server side or the client side. To use a socket as a client socket, it needs to be connected to a host, which can be done using the connect() method of the socket object. The connect() method accepts a tuple containing the host name/address and port number. For example, to connect to a host whose address is 192.168.51.100 on port 8080, the statement would be:

 

testsocket.connect(('192.168.51.100', 8080))

 

  3. Binding the socket to an address:

If the socket has to be used on the server side, then it has to be bound to an address and a port, thus naming it. To bind a socket to an address, the bind() method of the socket object has to be used. The parameter is a tuple containing the address to which the socket has to be bound and the port at which it has to listen for incoming requests. To use the same testsocket on the server side, the statement would be:

 

testsocket.bind(('192.168.51.100', 8080))

  4. Listening and accepting connections:

Once a socket has been named, it has to be instructed to listen at the given port for incoming requests. This can be done using the listen() method, which accepts a number representing the maximum number of queued connections. The argument should be at least 1. For example, the following code sets the maximum queued connections to 2:

 

testsocket.listen(2)

The next thing to be done is to accept the incoming connection requests. This can be done by the accept() function. This function returns a tuple containing a new socket object representing the client and the address of the client. For example:

clientsock, address = testsocket.accept()

In the above statement, clientsock contains a new socket object and address contains the address of the client.

 

  5. Transferring data/receiving data:

Data can be transferred using the recv() and send() methods of the socket object. The recv() method is used to receive the data sent from the server or from the client. The parameters are a buffer size for the data and optional flags. So, to receive data, the code would be:

buff = 1024

testsocket.recv(buff)

 

To send data, a call to the send() method is in order. The parameters are the data to be sent and optional flags. To elucidate further:

data = raw_input('>>')

testsocket.send(data)

Now that the steps are clear, let's create a simple echo server. First, the imports:

from socket import *

Then the constants that define the host, port, buffer size, and the address tuple to be used with bind():

 

from socket import *

HOST = 'localhost'

PORT = 21567

BUFSIZ = 1024

ADDR = (HOST, PORT)

 

Then create the server-side socket, bind it to the host and port, and set the maximum queue size to 2:

 

from socket import *

HOST = 'localhost'

PORT = 21567

BUFSIZ = 1024

ADDR = (HOST, PORT)

serversock = socket(AF_INET, SOCK_STREAM)

serversock.bind(ADDR)

serversock.listen(2)

 

Now, to make it listen for incoming requests continuously, place the accept() call in a while loop. This is not the preferred approach; the preferred way will be discussed in the next section:

 

from socket import *

HOST = 'localhost'
PORT = 21567
BUFSIZ = 1024
ADDR = (HOST, PORT)

serversock = socket(AF_INET, SOCK_STREAM)
serversock.bind(ADDR)
serversock.listen(2)

while 1:
    print 'waiting for connection...'
    clientsock, addr = serversock.accept()
    print '...connected from:', addr
    :
    :

Next, receive data from the client and echo it back. This has to continue until the client sends empty data or presses Ctrl+C. To achieve this, use another while loop, and close the connection when done.

 

from socket import *

HOST = 'localhost'
PORT = 21567
BUFSIZ = 1024
ADDR = (HOST, PORT)

serversock = socket(AF_INET, SOCK_STREAM)
serversock.bind(ADDR)
serversock.listen(2)

while 1:
    print 'waiting for connection...'
    clientsock, addr = serversock.accept()
    print '...connected from:', addr

    while 1:
        data = clientsock.recv(BUFSIZ)
        if not data: break
        clientsock.send('echoed: ' + data)

    clientsock.close()

serversock.close()

 

That's all for the server. Now for the client. The only difference is that there is no bind(), listen() or accept().

 

from socket import *

HOST = 'localhost'
PORT = 21567
BUFSIZ = 1024
ADDR = (HOST, PORT)

tcpCliSock = socket(AF_INET, SOCK_STREAM)
tcpCliSock.connect(ADDR)

while 1:
    data = raw_input('> ')
    if not data: break
    tcpCliSock.send(data)
    data = tcpCliSock.recv(1024)
    if not data: break
    print data

tcpCliSock.close()

Multi-Threaded Echo Server – Another Approach to Creating a Server:

 

The above example uses a while loop to service different clients. For elucidation it is OK, but in the real world it won't work well: more than one client cannot be served simultaneously with while constructs alone. There are several strategies to overcome this limitation; one of them is making the server multi-threaded. There are two parts to the creation of a multi-threaded server:

 

  1. Create a thread for each accepted connection:

This is the core of the multi-threaded server. For each accepted connection request, a different thread is created, and serving that particular client is carried out by that independent thread. Thus a quick response time can be achieved.

 

  2. Create a handler:

The handler is where the whole processing goes on; in our case, echoing the data back to the client.

 

To make the echo server multi-threaded, some changes have to be made. It starts with the accept part, as shown below:

 

from socket import *
import thread   # start_new_thread lives in Python 2's thread module

HOST = 'localhost'
PORT = 21567
BUFSIZ = 1024
ADDR = (HOST, PORT)

serversock = socket(AF_INET, SOCK_STREAM)
serversock.bind(ADDR)
serversock.listen(2)

while 1:
    print 'waiting for connection...'
    clientsock, addr = serversock.accept()
    print '...connected from:', addr
    thread.start_new_thread(handler, (clientsock, addr))

serversock.close()

 

After accepting a request, a new thread is created for the client. This is done for each connection request. The logic of handling the client is defined within the handler, which goes thus:

 

from socket import *
import thread

def handler(clientsock, addr):
    while 1:
        data = clientsock.recv(BUFSIZ)
        if not data: break
        clientsock.send('echoed: ' + data)

    clientsock.close()

if __name__ == '__main__':
    HOST = 'localhost'
    PORT = 21567
    BUFSIZ = 1024
    ADDR = (HOST, PORT)

    serversock = socket(AF_INET, SOCK_STREAM)
    serversock.bind(ADDR)
    serversock.listen(2)

    while 1:
        print 'waiting for connection...'
        clientsock, addr = serversock.accept()
        print '...connected from:', addr
        thread.start_new_thread(handler, (clientsock, addr))

    # some other cleanup code if necessary

 

The handler has to be defined before it is called. It contains the same code that was previously in the inner while loop. This example is not optimized, but it serves the purpose of providing a different approach to serving multiple clients. This brings us to the end of this section.

 

Parting Thoughts:

  1. Low-level sockets can be mixed and matched with other modules, such as threads and forks, to create a server capable of serving multiple clients simultaneously (see the sketch after this list).
  2. While using the threading approach, locking and synchronization issues must be kept in mind.
  3. Security measures must be taken care of when creating FTP-like servers.
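As a pointer for the first item, here is a minimal sketch, assuming Python 2 and the same illustrative host and port used above, of the thread-per-connection echo pattern expressed with the standard SocketServer module instead of hand-rolled threads:

# A minimal sketch (Python 2): the threaded echo server rewritten on top
# of the standard SocketServer module. Host/port values are the same
# illustrative ones used in this article.
import SocketServer

class EchoHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        while 1:
            data = self.request.recv(1024)
            if not data: break
            self.request.send('echoed: ' + data)

server = SocketServer.ThreadingTCPServer(('localhost', 21567), EchoHandler)
server.serve_forever()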

 

This brings us to the end of this discussion. In the introductory section I mentioned flexibility as one of the core aspects of Python; the ability to work with low-level sockets is one example. At the other end of the spectrum are the pre-built yet extensible web servers. These will be discussed in the near future. Till then…