Monday, May 27, 2013

The Engineering Awesomeness of F1

... or what software engineers could learn from F1

See that smoke behind the car? Not good...

For a long time, the main requirement for Formula 1 racing was power. Everything in the car but the driver would last for only one race, if that much - very often engines or gearboxes would blow up in the middle of a race. Indeed, within the same Grand Prix (i.e., practice sessions, qualifying and the race itself) different engines could be used.

But, of course, it was not all about engines. Every team updated their cars during the season, changing aerodynamic parts, suspension, brakes, etc. And how did they know whether an update designed for the car was really working? Unlimited testing. Even so, I suppose it was still difficult to measure evolution due to changing conditions (ranging from track grip and temperature to the evolution of the driver himself). Nonetheless, with enough mileage, statistics was their friend.


But then, in the last decade, everything changed. Let's talk about F1 challenges for engineering in 2013.


Only 8 engines can be used by a driver throughout the entire year, which means an average of more than 2 Grands Prix (GP) per engine. Gearboxes must last at least 5 GP. We can say that reliability became a main concern. Refueling is not allowed during races, thus low fuel consumption is a must. Power, of course, is still important. Driveability, which is the engine's response to the driver's input (through the throttle pedal), can be changed from race to race. What about space? The smaller the engine, the better the car can be designed around it. Oh, that reminds me of weight, which has big effects on the car's performance. Now, that's a good-looking list of non-functional requirements, right? All of this in a very competitive environment where thousandths of a second can make a big difference.

I think it's fair to say that F1 is the place where trade-offs between non-functional requirements are the most visible. This constant trade-off between performance, reliability and packaging in its different measures is present in the design of all elements of the car, including engines, gearboxes, brakes, KERS, and suspension. All of this within strict regulations that define the maximum/minimum size and weight of some specific components, while at the same time exploiting gaps in the regulations to create innovative solutions and gain an edge over the competition. After design, there is car set-up, changed at every race, seeking the best compromise between qualifying performance, race pace, tyre consumption, heavy-load pace, light-load pace and pit-stop strategy. Lastly, after the initial design, and in addition to set-up changes, we go back to evolution, in terms of updates to the car.

If in the not-so-old days the teams had unlimited testing, allowing them to evaluate their updates, nowadays it is very, very limited. It consists pretty much of a few collective pre-season testing sessions, 4 days each, plus 4 days of straight-line tests for each team. I'm sad for the drivers - after all, even in programming, which is far more theoretical than race-driving, it is difficult to improve without practicing. From the engineering point of view, however, this is just awesome. What do teams do to compensate for this lack of on-track testing?

  • Computer simulation - Now the teams rely a hell of a lot more on computer models to understand the effect of changes to the car. As aerodynamics is so important for performance, a key factor here is computational fluid dynamics (CFD). But not just that: top teams also develop simulators which are probably the most realistic racing video-games ever made.
  • Wind tunnels - this is when you build a scale model of your car, take it to a tunnel with a huge fan, and see how the air flows over and around the car. The limitation here is that the scale model has to be at most 60% of the real size, while the maximum speed is 180 km/h. Compare that with the last Spanish GP, where the maximum speed during qualifying was 318.5 km/h. The more marginal the gains to be obtained, the more important the precision of the models becomes.

    Due to these limitations, the last resort is to evaluate updates during the GP weekend, in the...
  • Practice sessions - In a GP weekend, the qualifying session is preceded by 3 free-practice sessions: usually, 2 sessions of 1.5 hours on Friday, plus a 1-hour session on Saturday. It amounts to 4 hours of on-track action, which used to serve to acclimatize drivers to the track and to adjust car set-ups. Now, teams also use it to evaluate new parts for the car, especially during the first session. So, the testing is very time-constrained, on a track shared with all the other cars, tyres degrade really fast, and, you know, changing some parts of a car can be time-consuming as well. Oh, and don't forget that rain can ruin the engineers' attempts to evaluate updates.

What looks like green paint is used to check air flow
With so many constraints, it's incredible that they are actually able to improve a lot throughout the year. Nonetheless, some top teams eventually face big problems - in 2012, it was Ferrari; now, in 2013, it is McLaren. The culprit is always the same: correlation. This is similar to Brian Smith's discussion about computing on models and acting on the real world, which (as he argues) explains why proving the correctness of software does not give us any guarantee that it will behave correctly once it's out there in the real world. So, designers build models and simulations and find out that some changes will make the car behave in a specific way, but when it hits the track, they eventually discover that this is not the case. Then, it doesn't matter what 'virtual' improvements are made, since they will not translate into actual improvements. When this happens, what really matters is to fix the correlation issue, i.e., to ensure that the model being used corresponds to the real world.

Then, there is the last F1 aspect I'd like to discuss (for now), which is all about Big Data. F1 cars feature hundreds of sensors, which send real-time data about the car, from brake temperatures to lateral acceleration and component failures. This is used during a race, for instance, to identify problems with the car (and possibly fix them), to inform the driver where (and how) he can improve, to decide when to pit-stop, and to estimate target times for each lap. The last two also require information about the other cars on track.

Additional sensors behind the left tyre; these are not used in races.

Besides all these challenges, there are the more specific aspects of automotive technologies, such as combustion, energy recovery, materials, and so on. Anyway, summing up just the challenges we discussed, we have:
  • Difficult compromises between non-functional requirements, in a very competitive environment, with strict technical regulations;
  • Lack of on-track testing forcing the teams to rely on models and simulations;
  • Gathering and processing big data in real time.
Pretty awesome, right? I bet we, software engineers, could learn a thing or two from these guys.


Images copyright: Ferrari with blown engine, from http://www.dailymail.co.uk. Mercedes with sensor, from http://www.crash.net. Red Bull with flow viz, from http://www.auto123.com.

Monday, May 20, 2013

Highlights from ICSE 2013

Last weekend was the beginning of the 35th International Conference on Software Engineering (a.k.a., ICSE'13). Here are my comments on some of the papers that will be presented in the main track:

  • Beyond Boolean Product-Line Model Checking: Dealing with Feature Attributes and Multi-features
  • Analysis of User Comments: An Approach for Software Requirements Evolution
  • What Good Are Strong Specifications?
  • UML in Practice
  • GuideArch: Guiding the Exploration of Architectural Solution Space under Uncertainty

By the way, "highlights" are by no means "best papers", just a selection based on my personal interests. Check out the proceedings to see other good papers within your field.



Beyond Boolean Product-Line Model Checking: Dealing with Feature Attributes and Multi-features
by Maxime Cordy, Pierre-Yves Schobbens, Patrick Heymans and Axel Legay

This paper is about two important concerns for variability models: numerical attributes and multi-features. Usually, features can be seen as boolean: either they're there (in the product), or they're not. Numerical attributes, on the other hand, are... numerical. I mean, it's not just a matter of whether they're there or not, but of what their value is. Examples from the paper are timeout and message size (in a communication protocol). The other thing is multi-features, which are features that may have multiple instances. E.g., think about a smart house, which may have NO video camera, may have ONE camera, or may have MULTIPLE cameras (each with its own configuration).

What I really liked about this paper is that it describes the ambiguities that arise when defining multi-features with child features, with respect to group cardinalities and feature cardinalities. It then goes on to define a precise semantics that prevents this ambiguity. It also handles the efficiency problem of reasoning with numeric attributes in variability models by mapping them to sets of boolean variables. So they have two versions of their model-checking tool: one using a Satisfiability Modulo Theories (SMT) solver, which handles numeric attributes but inefficiently; and another one which converts them to boolean variables, which is more efficient but also more limited.

Another good quality, from the writing point of view, is that instead of jumping straight to the underlying formalism (which, due to space limitations, is always a tempting option) and relying on the reader to decode it, the paper provides clear, precise definitions of the problem and of the solution in plain old English text (POET? Is that a thing?).

This work addresses some of the problems we're facing while extending a framework on requirements for adaptive systems; hopefully we can borrow some of these ideas to improve our own work.


Analysis of User Comments: An Approach for Software Requirements Evolution
by Laura V. Galvis Carreño and Kristina Winbladh

We all know the importance of user feedback for improving a product. If, before the Internet became the standard channel for distributing software, the problem was "how can we get this feedback?", the problem now is "how can we go through the sheer amount of feedback available and capture what is useful?" The idea in this paper is to automatically extract the relevant information from said feedback, providing a concise report which serves as an input for requirements analysis. This report includes the topics mentioned in the feedback and excerpts related to each topic.

The evaluation of this kind of approach is quite tricky. They compared results from the automatic extraction with results from manual extraction. The trade-off here is that the manual extraction was done by the same people who developed the approach, so there is some bias, as their heuristics and criteria were somehow embedded in the automatic extraction. On the other hand, if the manual extraction had been done by others, perhaps it would not have been feasible to compare the two results. That said, I think the authors did a pretty good job with the evaluation.

Talking about evaluation, it would be good if they made their (complete) data set available, as well as their implementation, to allow replication of the experiment. Also, I wish I had a popular product of my own on which to test this approach ;)


What Good Are Strong Specifications?
by Nadia Polikarpova, Carlo A. Furia, Yu Pei, Yi Wei, and Bertrand Meyer

There is some prejudice against formal methods. Been there, done that. By the way, take a look at what Dijkstra had to say about this ("On the Economy of doing Mathematics"). Nevertheless, practitioners have been very keen to jump on the bandwagon of test-driven development and design-by-contract (omg, was this all a worldwide plot to gradually make people unknowingly adopt formal methods? If so, hats off!). Anyway, this paper describes these approaches as providing lightweight specifications, and tries to find out what would be the comparative benefit of adopting strong (heavyweight?) specifications, with pre-conditions, post-conditions and invariants (which, according to my spell-checker, is not a word).

I like empirical work that challenges "conventional wisdom", independently of whether the results are positive or negative. The result here is that better specifications lead to more faults being identified. What about effort? "These numbers suggest that the specification overhead of MBC is moderate and abundantly paid off by the advantages in terms of errors found and quality of documentation". Well, for me this conclusion was far-fetched. First, the spec/code ratio of the strong specifications is more than double that of the lightweight ones. Second, while they compare this ratio with the ratio of other approaches, it boils down to how difficult it is and how long it takes to create those lines, which is (understandably) not compared.

So, my provocative question is: for better specifications, do we need better techniques/formalisms/technologies, or do we need better education on using what's already out there? Moreover, the specification checks the program, but what/who checks the specification?



UML in Practice
by Marian Petre

UML is the de facto standard for software modeling. Or is it? If we look at academic papers, it's usual to see statements along the lines of "we adopted (insert UML diagram here) because it is an industrial standard, largely adopted by practitioners..." (the same goes for BPMN, btw). This paper tries to demystify that notion, through interviews with 50 software developers (from 50 different companies). The results are... *drumroll*
  • No UML (35/50)
  • Retrofit (1/50)
  • Selective (11/50)
  • Automated code generation (3/50)
  • Wholehearted (none)
Within 'selective use', the most adopted diagrams are class diagrams (7), sequence diagrams (6) and activity diagrams (also 6). While these results are interesting by themselves, I wish the author had also explicitly identified whether the interviewees use some other kind of modeling. After all, it may be the case that UML is not largely adopted in general, but is largely adopted within the set of people who use some kind of modeling.

Anyway, besides an overview of UML criticisms in the literature, the paper provides more details and discusses the answers from the interviews; it's a must-read for anyone designing modeling languages. A key point is complexity. Sorry, but I have to shout it out loud now: guys, let's keep things simple! Let's create our new approaches/ models/ frameworks/ tools, but then let's consciously work toward simplifying them; we don't want to increase complexity, we want to tame it.

Back to the article, some broad categories of criticism described in the paper are: lack of context, overheads of understanding the notation, and issues of synchronization/consistency (meaning co-evolution of models). I'd like to finish this informal review with a remark from one of the participants of the study, who said his company dropped UML for an "in-house formalism that addresses the 'whole' system, i.e. software, hardware and how they fit together, along with context, requirements (functional and nonfunctional), decisions, risks, etc." - I wish that company would share their solution with us! =)


GuideArch: Guiding the Exploration of Architectural Solution Space under Uncertainty
by Naeem Esfahani, Sam Malek and Kaveh Razavi

To make architectural decisions, we need information that we can only measure precisely later on - e.g., response time, battery consumption. However, the later we change a decision, the costlier it is. This conflict is the subject of this paper. It provides a framework for reasoning with imprecise information, which can be made more precise later on in the project. This imprecise information is the impact of an alternative on quality properties, expressed either as numerical ranges or as enumerations (e.g., low, medium, high). It then uses fuzzy logic to reason over these ranges.

This seems similar to the notions of softgoals and quality constraints in requirements modeling. Softgoals are goals for which there are no clear-cut achievement criteria. To make decisions, we can analyze the subjective impact (contribution) of different alternatives on the softgoals of interest. On the other hand, at some point we may define/assume a refinement of a softgoal (usually a quality constraint), becoming able to make objective decisions.

I particularly like the case study they present, with 10 architectural decisions and their alternatives - more details here. They also provide a web-based tool, but I wonder if they used their approach when architecting the tool - after all, which quality property could have led them to use Silverlight?



I'd like to hear your views on these papers - consider yourself invited to continue the discussion in the comments below!

Wednesday, May 15, 2013

Create the thing you want to exist in the world

New post-it for my office =)



Send me your own drawing through the comments and I'll be happy to update the post with your pictures!

Friday, April 5, 2013

Turtlefy!

Remember The Turtle Language (I mean, Logo) from the last post? I was so disappointed that this online version of Logo (you know, by Google, with Blockly, etc.) didn't have an actual turtle... well, but why complain when you can change it yourself?

That online version of Logo is client-side JavaScript, which means that we can change it as we wish! Of course, sometimes people can make our lives difficult with obfuscation and such, but this one was pretty straightforward. The function that draws the "turtle" is Turtle.display, in Turtle.js, so I just had to replace that code with code that draws a (somewhat circly) turtle.

Yayyy, now with a turtle!

So, you can grab the bookmarklet below and run it in Blockly's Logo.

Bookmarklet: Turtlefy!

Or, alternatively, just run the code below in your javascript console:


Turtle.display = function() {
  Turtle.ctxDisplay.globalCompositeOperation = 'copy';
  Turtle.ctxDisplay.drawImage(Turtle.ctxScratch.canvas, 0, 0);
  Turtle.ctxDisplay.globalCompositeOperation = 'source-over';

  if (Turtle.visible) {
    /* Draw the turtle body. */
    var radians = 2 * Math.PI * Turtle.heading / 360;
    var radius = Turtle.ctxScratch.lineWidth / 2 + 10;

    /* Translation/rotation of the turtle, so it can point to the right way */
    Turtle.ctxDisplay.translate(Turtle.x, Turtle.y);
    Turtle.ctxDisplay.rotate(radians);
    /* Turtle body */
    Turtle.ctxDisplay.beginPath();
    Turtle.ctxDisplay.arc(0, 0, radius, 0, 2 * Math.PI, false);
    Turtle.ctxDisplay.lineWidth = 3;
    Turtle.ctxDisplay.fillStyle = '#009900';
    Turtle.ctxDisplay.strokeStyle = '#0eee09';
    Turtle.ctxDisplay.fill();
    /*head*/
    Turtle.ctxDisplay.beginPath();
    Turtle.ctxDisplay.arc(0, - radius, radius/2, 0, 2 * Math.PI, false);
    Turtle.ctxDisplay.fill();
    var pawRadius = radius/3;
    /*left front paw*/
    Turtle.ctxDisplay.beginPath();
    Turtle.ctxDisplay.arc(1-radius, 7 - radius, pawRadius, 0, 2 * Math.PI, false);
    Turtle.ctxDisplay.fill();
    /*left back paw 2*/
    Turtle.ctxDisplay.beginPath();
    Turtle.ctxDisplay.arc(3-radius, radius -3, pawRadius, 0, 2 * Math.PI, false);
    Turtle.ctxDisplay.fill();
    /*right front paw*/
    Turtle.ctxDisplay.beginPath();
    Turtle.ctxDisplay.arc(radius-1, 7 - radius, pawRadius, 0, 2 * Math.PI, false);
    Turtle.ctxDisplay.fill();
    /*right back paw 2*/
    Turtle.ctxDisplay.beginPath();
    Turtle.ctxDisplay.arc(radius-3, radius -3, pawRadius, 0, 2 * Math.PI, false);
    Turtle.ctxDisplay.fill();
    /*undo the rotation/translation to not mess with the drawing*/
    Turtle.ctxDisplay.rotate(-radians);
    Turtle.ctxDisplay.translate(-Turtle.x, -Turtle.y); 
  }
};

The original code calculated the right position for the head - instead, this one just uses canvas rotation. I have to say, though, that the turtle gets a bit dismembered with big fat turtles (i.e., very thick line widths).

Friday, March 29, 2013

Logo online

Dude, I wish I could have made this when I was a kid!
This week I heard about a kind of online version of Logo, using Blockly. I happened to have had a year of Logo at my elementary school, so I definitely had to check it out; it did ring a lot of bells!

Don't know Logo? It's an old programming language created for educational purposes. It's probably full of cool features and so on, but for me (and, I'm sure, for a lot of other people) it's The Turtle Language. In the center of the screen, there is a turtle. When it walks, it leaves a trace on the screen. You can tell the turtle to move forward, backward, or to turn (in degrees). With this, you can go about drawing with your computer by programming the turtle. Then there are loops, conditionals, color changes, etc. This online version happily works very well, and since it is Blockly, we don't even have to learn any syntax or keywords. The downside is... that there is no turtle - it was replaced by a circle+arrow drawing. Sigh.

While nostalgically trying it out, I accidentally created a star, which evolved into this psychedelic star. Try it out and play with the color parameters; it's cool to see the drawing being formed.

I can't say that early experience with Logo led me to become a programmer, but it probably helped. I still remember the amazement I felt when figuring out how to draw circles! Too bad the turtle didn't survive longer at my school - learning to draw a hexagon this way was no less exciting than playing hide and seek (ok, this is probably a nostalgic exaggeration, but still...)

Btw, if you're interested in teaching programming to kids, check this Maze game. Also, see how all this Blockly stuff started in this blog post.

Programming with Blockly. Yeah, I know, quite ugly... but it works very well!

Wednesday, January 30, 2013

QVT transformations with Eclipse [tutorial]

Eclipse has a working plugin for QVT Operational transformations. So, let's use it!

I suggest you start with the Eclipse Modeling Tools plus the Model-to-Model Operational QVT plugins (check how to download/install it).

Also, QVT requires a metamodel of the source and target languages being used (in case you don't know, a metamodel is a model that describes another model). In this example we'll use this simplified use case diagram metamodel. Lastly, of course, we need an actual model to be used as input! We will use this piece of a Meeting Scheduler use case diagram.

Ugly, right? Agreed...

If you want to create your own use case models, you can use the default editor that Eclipse can generate, which allows you to edit models in a tree-based view. You can create an editor for our use case metamodel by following steps I and II from this tutorial by Vitor Souza (in step II.2, you can use "usecasepackage" as the base package name). Then follow step IV to actually create the model (the model object for step IV.4 is just "Model").

But now, let's get to the transformations!


Quick and Dirty version:

  1. Run Eclipse with QVT
  2. Create a new Operational QVT Project (File/New/Other.../Model to Model Transformation/Operational QVT Project). In the wizard, be sure to select "Create a simple project".
  3. Import the metamodel (.ecore) and the model (.usecase). My favorite way of doing this is by just drag-and-dropping the files into the Project Explorer. Alternatively, right-click your project, select 'Import...', then General/File System.
  4. Create a new Operational QVT Transformation (File/New/Other.../Model to Model Transformation/Operational QVT Transformation). In the Module name field, write the name of the transformation you are creating.
  5. Now it is time for the obscure Eclipse configuration: go to Project/Properties/QVT settings/Metamodel Mappings, click 'Add' and define the source/target metamodels. In the source field you need to write the URI of the metamodel (in our example it's http://www.cin.ufpe.br/useCaseUri ). In the target field you can browse the current project and select the metamodel file.
  6. Code! Or use this transformation example.
  7. Now, more configuration. Right-click the transformation file and select Run As/Run Configurations..., select 'Operational QVT Interpreter' and click the 'New launch configuration' button. Go to the Model URI field and Browse the project to select the input model file (.usecase).
  8. Press "Apply" to save these settings.
  9. Run!


Now, if everything is right, you will see... nothing. Just go check the transformation results in your model and be happy! Once these settings are done, all you need to do to run the transformation again is to select it in the 'Run' dropdown button.

Thorough version:

  1. Run  Eclipse with QVT
  2. Please make it wear good running shoes so as not to harm its feet.
  3. Create a new Operational QVT Project. To do so, go to File/New/Other... (Ctrl+N) and then select 'Operational QVT Project' in the 'Model to Model Transformation' folder. In the wizard, be sure to select "Create a simple project", as shown in the pictures below. 
  4. File/New/Other...
    New Operational QVT Project

    Write the project name, select 'Create a simple project' and click Next

    Just finish
  5. Import the metamodel (.ecore) and the model (.usecase).  There are several ways to do this. My favorite one is to simply drag the files and drop them into Eclipse's Project Explorer (or Package Explorer, depending on the version of Eclipse).


  6. Create a new Operational QVT Transformation (File/New/Other.../Model to Model Transformation/Operational QVT Transformation). In the Module name field, write the name of the transformation you are creating.

  7. Now it is time for the obscure Eclipse configuration: go to Project/Properties/QVT settings/Metamodel Mappings, click 'Add' and define the source/target metamodels. In the source field you need to write the URI of the metamodel (in our example it's http://www.cin.ufpe.br/useCaseUri ). In the target field you can browse the current project and select the metamodel file.


  8. We're almost done with the configuration; now you can already code your transformations. These slides may help you with that. Or, use this example, based on the paper "Towards Architectural Evolution through Model Transformations", from SEKE 2012:
    modeltype UseCase uses UseCase('http://www.cin.ufpe.br/useCaseUri');
    transformation AddActor(inout useCaseModel : UseCase);
    
    main()
    {
      useCaseModel.rootObjects()[Model].map applyAddActor();
    }
    
    mapping inout Model::applyAddActor()
    {
       self.actor += new Actor("newActor");
    }
    
    constructor Actor::Actor(myName : String)
    {
       name := myName;
    }
    
    
  9. Now, more configuration. Right-click the transformation file and select Run As/Run Configurations...  , select 'Operational QVT Interpreter' and click the 'New launch configuration' button. Go to the Model URI field and Browse the project to select the input model file (.usecase)


  10. Browse and select your input model
    After selecting the input model Eclipse may show an error like this: "Invalid source URI 'platform:/resource/UseCaseTransformation/MeetingScheduler.usecase' for parameter 'useCaseModel'". If this happens, you need to check if your model has the following attribute in its root node: xsi:schemaLocation="http://www.cin.ufpe.br/useCaseUri UseCase.ecore". This attribute is required for Eclipse to know which file to load for the given URI (even though we already defined that in step 5, go figure...)

  11. Press "Apply", to save these settings
  12. Press Run!
You can check the model to see if the new Actor was created:

Thursday, August 2, 2012

Basic instructions for git / GitHub

So, git is a version control system, and GitHub is a popular host for git repositories. I don't know exactly why it's so popular, but it may be because it promotes forking and merging. If I want to extend or fix a bug in a publicly hosted project (e.g., jquery, Play framework, node), I can easily fork it and modify it at will. I can then submit (or not) my modifications to the owner of the original repository.

On a recent project, I finally decided to give it a try. I signed up at GitHub and installed its Windows client, which provides the basics: create local repositories, create branches, commit, and publish/sync. After creating a local repository, I moved my files to the repository folder created in git's default folder, went back to the client, committed and published. That's it.

I found the dynamics quite interesting. I change my files, as usual. Then, anytime I want, I go to the client, which lists the modified files along with their diffs. Then I commit these files, describing what has changed. This is all local, and I can sync with the server anytime.
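For reference, if you'd rather do that change/commit/sync cycle from a shell instead of the client, the usual commands are something like the ones below (the commit message is just an example, and I'm assuming the remote was already set up by the client):

$ git add -A
$ git commit -m 'describe what has changed'
$ git push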

However, there were two other things I wanted to get done: to create a release and to publish it live (as it is a web project). Neither is supported by the Windows client as of now, and it's not quite obvious how to do it, so this is how I've done it:

How to create a release with GitHub Windows client

This can be done with the tagging mechanism. Once you create a tag, all its files are available as a single .zip file. So, after committing everything you want, you will need to open a command shell and add a tag referring to that commit.

In the client, go to the repository, select the branch (if you haven't created a branch yet it will be the default 'master' branch), then click on 'tools' / 'open a shell here'.

To find out the ID of the commit, use the git log command:

$ git log
or, for a brief version (preferred way):

$ git log --pretty=oneline

This will list all commits of this branch. In my case, it was this:


50b2ab641937cfcd9792923fa7bc47a40d6e51d0 balancing the difficulty
4edcda321423bece568e055b902e5d68d51251fe first commit

where the first column is the ID and the second column is the description of the commit. Then, you just need to use the git tag command, like this:

$ git tag -a 'tagName' 'commitId'

No, you don't need to type the entire ID; the first few characters will suffice. For me, it was this:

$ git tag -a v1.0 50b2ab

This opens a text editor - write the tag description, save the file and close it. Now your release 1.0 is created. However, it exists only in the local repository; you still need to send it to the server:


$ git push --tags

Ok, now you're done! Go to your GitHub page ( https://github.com/username ) and see that your release is right there in the Tags tab, with the option to download it as .zip or .tar.gz

This was done based on the GitHub Learn site. I know, it is not as easy as it should be, but you probably won't be doing this often, anyhow.
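By the way, if you'd rather skip the text editor step, you can pass the tag description directly with the -m flag (the tag name and commit ID below are just the ones from my example; the message is whatever you want):

$ git tag -a v1.0 50b2ab -m 'first release'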

The git tag command, without arguments, will list all tags you created so far:

$ git tag

A related command is

$ git describe --tags

which, instead of listing every tag, shows the most recent tag reachable from the current commit (plus how far ahead of it you are) - so it's not quite the same thing. Anyway, you may also delete your tags.
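If you ever need to do that, deleting a tag looks something like this - first locally, then on the server (here I'm reusing the v1.0 tag from my example):

$ git tag -d v1.0
$ git push origin :refs/tags/v1.0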

So, summing up:

$ git log --pretty=oneline
$ git tag -a 'tagName' 'commitId'
$ git push --tags


How to host your web project at GitHub

GitHub provides the feature of creating static pages for your project, along the lines of Google Sites, as well as a wiki. A third alternative is to create your own website, either for your user account as a whole or for a particular project/repository.

To create a site for your GitHub project, open it in the GitHub client, create a new branch named 'gh-pages' and publish it. It will be available at http://username.github.com/repositoryName

Since my project was a web project, which was the actual site I wanted to get deployed, instead of creating a separate branch from scratch I just used my 'master' branch as the starting point. To do this, go to your repository in the Windows client, click on the branch name (at the top of the window), then click 'MANAGE'. Then, click on the plus icon of the branch that you will use as the starting point and type in 'gh-pages'. The plus icon reads "Create a new branch using this branch as the starting point".
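If you prefer the shell over the client, I believe the equivalent is just to create the gh-pages branch from master and push it (assuming the default remote name, 'origin'):

$ git checkout master
$ git checkout -b gh-pages
$ git push origin gh-pages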

After that, you can point your browser to http://username.github.com/repositoryName and see your web system up and running. Actually, it may take a while for the page to be created; in the meantime GitHub will show a 404 error page.

So, summing up (2)


  1. Create a 'gh-pages' branch in your repository
  2. Access it at http://username.github.com/repositoryName


Points to explore later



And what about requirements?

The GitHub client is far, FAR from complete. However, it does provide the basic and most-used functionalities, with good usability and great response time. It does help its users. This is a lesson on requirements prioritization and release planning ;)