Open Source in a Small Project. Usage Experience
Author: Sergey Satskiy
Publication date: 20.06.2005
Version: 1.0
Diving into the world of Open Source for the first time, one often does not know where to start, how to organize work in a project, or where to get the necessary tools. The purpose of this article is to provide a starting point for newly opened projects. The article focuses on Open Source tools that support the collective work of developers, not on the libraries and technologies used for code development.
An explanation of the motivation behind selecting particular tools has been deliberately omitted. Quite often the motivation was very specific to the particular project, while the article is intended to be as general as possible.
Some time has passed since the project was started, and new Open Source tools have become available on the Internet. That is why some decisions that were made for very strong reasons at the time may now seem absolutely wrong. Nevertheless, the experience may be of interest to the reader.
The article is built as a consecutive narration in order to explain how the separate subsystems were linked to each other to form a single system, as well as to show which factors may be essential.
The company that developed the project was just entering the software market. In fact, this was the second software project in the company and the first based on a UNIX-like operating system. The company was not prepared to invest considerable money in either hardware or software, which is why Open Source technologies suited perfectly.
The project target was software to run on a single-board computer based on the PC architecture, built around an Intel-compatible processor produced by VIA. All the software, including the operating system, had to reside on a Compact Flash memory card. There was also a set of interface boards that had to be properly controlled. There were no strict limits on either RAM size or Compact Flash size; however, being careless in that area was unacceptable, so approximate limits of 256 MB of Compact Flash and the same amount of RAM were adopted.
The project started with one developer; by the end there were three developers, one tester and one designer. In addition, application domain experts, hardware engineers and other specialists took part in the project.
It was a big advantage that the project started from scratch. It was possible to try ideas that are usually impossible to apply in the middle of the project because of numerous potential changes and possible side effects.
Linux was selected as the target platform operating system. One of the popular Russian distributions available on the market at that time was taken as a starting point. Linux was installed on the developers' computers and, with some changes, on the target platform. These changes are briefly described below.
Collective Work on the Source Code
Since more than one developer was going to work on the project, a server was used to support all team activities. The server ran the same Linux that was used on the developers' computers and the target platform. The server hardware had almost the lowest profile among all the project computers: it was built on an Intel Celeron processor and had 256 MB of RAM, making it more powerful only than the target platform. The server hardware was the only direct hardware and software expense in addition to the developers' workstations and the target platform.
CVS, the Concurrent Versions System (http://savannah.nongnu.org/projects/cvs), was used for version control. Developers accessed the source code with the command line utilities or with LinCVS (http://www.lincvs.com), a graphical wrapper for the X Window System. MS Windows users used WinCVS (http://cvsgui.sourceforge.net) to get similar functionality. WinCVS allows scripts to be used for automating some CVS repository operations; to enable this feature, Python for Windows (http://www.python.org) or TCL (http://tcl.sourceforge.net) should be installed.
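The day-to-day command line workflow looked roughly as follows; the server path, the user name and the module name "project" are invented for illustration:

```shell
# Point the client at the repository on the team server (pserver access)
export CVSROOT=:pserver:developer@server:/var/cvsroot
cvs login

cvs checkout project            # get a fresh working copy of the module
cvs update -d                   # merge in other developers' changes
cvs commit -m "Fix the parser"  # publish local changes to the repository
```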
Besides the source code, CVS controlled some of the documentation that was created in parallel with the source code development.
CVS coped with the task. The most inconvenient aspect of working with CVS was the inability to lock a file that was about to be changed, which led to manual merges of files modified by more than one developer at a time. This drawback was partially compensated by very short change cycles, and further mitigated by the small number of developers. Another non-ideal aspect was the way binary files were stored. Some of the documentation used the RTF format, and each new version of such a file was stored entirely as a binary file, which wasted disk space.
Bug Tracking System
Access to the bug tracking system was needed not only by those directly involved in the project but also by other company employees. MS Windows was installed on many of the company's computers, so one of the requirements for the bug tracking system was access via a web interface.
Other significant requirements were simplicity of installation and setup, support for the local language in the interface, and an intuitive interface. A suitable bug tracking system was found quickly: a little known system named Mantis (http://www.mantisbt.org). Mantis stores all the bug information in a MySQL database and is essentially a set of PHP (http://www.php.net) scripts running under the control of a web server. Thus it was necessary to run the MySQL database server (http://www.mysql.com) and the Apache web server (http://www.apache.org). At this stage the scheme of working on the project was as shown in Figure 1.
Figure 1. Source code and bug database access.
Mantis coped with the task very well. The only drawback was that it did not support e-mail notifications about changes in the database; this feature was implemented later, but it was absent at the time of installation. Perhaps for large projects other bug tracking systems such as Bugzilla (http://www.bugzilla.org) or GNATS (http://www.gnu.org/software/gnats) would be a better choice, while systems like Mantis will do for small projects.
Documentation
All the documentation created in the project can be split into three groups. The first group contained the documents delivered to the customer; these used the RTF format. The second group contained general documents, such as one describing a single software module; these also used the RTF format. The last group contained the documents dealing with the source code directly, for example one describing a library source code.
In order to avoid, as much as possible, discrepancies between the documentation and the actual source code, a decision was taken to use a documentation generation tool; Doxygen (http://www.doxygen.org) was selected. Doxygen supports many styles of special remarks in the source code; however, it was planned to have a uniform style across the project. To achieve a uniform style and to make developers comment their own code, corresponding sections were included in the coding standard. The standard was not meant to be a complete reference on how to use all the programming language elements, idioms etc., but to give a general feeling of the style. Moreover, the coding standard was planned to be as short as possible (see the references below).
Doxygen uses the external tool Graphviz (http://www.graphviz.org) for the graphical representation of dependencies. Once the tool is installed and the corresponding configuration is done, the generated documentation includes graphical dependency diagrams between classes and project files.
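A minimal Doxyfile fragment for such a setup might look as follows; all other options are assumed to keep their default values:

```
# Doxyfile fragment (a sketch): enable Graphviz diagrams and HTML output
HAVE_DOT            = YES
CLASS_GRAPH         = YES
COLLABORATION_GRAPH = YES
GENERATE_HTML       = YES
GENERATE_LATEX      = NO
```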
The generated documentation can be produced in different formats, one of which is HTML. It turned out to be very convenient to access the generated HTML files via the web interface, because the Apache server was already running. Doxygen proved very helpful for the libraries, though for some terminal executable modules it turned out to be excessive. On the other hand, even for those terminal modules Doxygen was a stimulus for developers, firstly, to comment their code at all and, secondly, to do it clearly.
Backup Copies
The company already had equipment for making backup copies, controlled by software running under MS Windows. To use that equipment the following scheme was adopted:
- Once in a certain period a shell script was run by the crond daemon. The script created a tar archive with all the necessary files and then compressed the archive with the bzip2 utility.
- The ready *.tar.bz2 archive was copied to the directory of a user account created specially to support the backup system.
- The proftpd ftp server (http://www.proftpd.org) was run on the server.
- The external backup software collected the *.tar.bz2 archive file over ftp, using the special user account to access the file.
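The first two steps can be sketched as a short shell script. All the paths and names here are invented for illustration; mktemp stands in for the real repository and backup directories so that the sketch is self-contained:

```shell
#!/bin/sh
# Nightly backup sketch: archive the files, compress with bzip2,
# and place the result where the backup user's ftp account can see it.
SRC_DIR=$(mktemp -d)      # stands in for the CVS repository root
BACKUP_DIR=$(mktemp -d)   # stands in for the backup user's home directory
echo "demo content" > "$SRC_DIR/file.txt"

STAMP=$(date +%Y%m%d)
ARCHIVE="$BACKUP_DIR/project-$STAMP.tar.bz2"

# Create the tar archive and compress it in one pass
tar -C "$SRC_DIR" -cf - . | bzip2 > "$ARCHIVE"
ls -l "$ARCHIVE"
```

In the real setup the script was started by the crond daemon and the archive was then picked up over ftp by the external backup software.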
The only advantage of the described scheme was that it did work. Large projects should employ other solutions.
Web Access to CVS
At some stage it turned out to be convenient to have access to CVS via a web interface. This functionality is provided by ViewCVS (http://viewcvs.sourceforge.net).
ViewCVS coped with the task very well.
Building and Testing
The project consisted of several subprojects: libraries and executable modules. The project was built using the make utility and make files. Each subproject had its own make file with a set of mandatory targets and possibly some subproject-specific targets. Rules common to all subprojects were moved into a separate file which was included by each of the make files.
The following targets were mandatory:
- dependency. This target built the dependencies between source code files.
- item. This target built a particular subproject. The ready to use libraries, configuration files, executable modules etc. were not installed by this target.
- install. This target executed the item target and then installed the libraries, configuration files, executable modules etc.
- doc. This target generated documentation from the source code.
- webdoc. This target executed the doc target and then copied the HTML files into a directory for access via the web interface. The RTF documentation was also copied to the web access directories by this target.
- test. Each subproject had a directory with unit test source code. The requirement for the unit test utilities was to print easy to read results on the standard output. The test target built those unit test utilities.
- exectest. This target executed the unit test utilities for a subproject. Each unit test utility was run under a memory leak control utility.
If any of the mandatory targets was not applicable to a particular subproject, it was still included in the corresponding make file, but no real actions were executed; usually a message was printed that the target was not applicable.
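A subproject make file with the mandatory targets might be sketched as follows. All the variable and path names are invented, valgrind stands in for the memory leak checker (which the article does not name), and recipe lines must be indented with tabs:

```make
# Sketch of a subproject make file (names are illustrative)
include ../common.rules

dependency:
	$(CXX) -MM $(SOURCES) > .depend

item: dependency
	$(CXX) $(CXXFLAGS) -o $(MODULE) $(SOURCES)

install: item
	cp $(MODULE) $(INSTALL_DIR)

doc:
	doxygen Doxyfile

webdoc: doc
	cp -r html/* $(WEB_DOC_DIR)

test:
	$(MAKE) -C unittests

exectest: test
	for t in unittests/*_test; do valgrind ./$$t; done

# In a subproject where a target is not applicable, it still exists:
# doc:
#	@echo "doc: not applicable for this subproject"
```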
The whole project was built by a shell script which took a CVS code snapshot and then called the appropriate targets of each subproject, taking into consideration both the right sequence of targets within a single subproject and the right build order of the subprojects. One more shell script worked with the crond daemon. All the output of the build process, including the output of the unit test utilities, was collected into an HTML file which was copied into the appropriate directory for access via the web interface. The script was run every night.
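A simplified sketch of such a driver script is shown below. The subproject names, the build order and the paths are invented, and the real make invocations are only indicated in a comment:

```shell
#!/bin/sh
# Nightly build driver sketch: walk the subprojects in dependency order,
# collect all output, and publish it as an HTML page.
LOG=$(mktemp)
WEB_DIR=$(mktemp -d)                 # stands in for the Apache-served directory
SUBPROJECTS="libcore libproto app"   # illustrative build order

{
  echo "Build started: $(date)"
  for sp in $SUBPROJECTS; do
      echo "=== $sp ==="
      # In the real script, for each subproject:
      #   make -C $sp dependency item install doc webdoc test exectest
      echo "(build output of $sp would appear here)"
  done
} > "$LOG" 2>&1

# Wrap the plain-text log into a minimal HTML page
{
  echo "<html><body><pre>"
  sed 's/&/\&amp;/g; s/</\&lt;/g; s/>/\&gt;/g' "$LOG"
  echo "</pre></body></html>"
} > "$WEB_DIR/build.html"
```

The resulting page could then be checked every morning in a browser, exactly as described above.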
The described scheme made it possible to check the project status every morning via the web interface and to get feedback on any build or test execution problem (including memory leaks) very quickly. Of course this kind of testing does not cover all testing needs; however, it covers some parts of regression and sanity testing. Up to date documentation was also always accessible via the web interface.
The building system coped with the task perfectly.
Day by day the number of files in the project grew, as did their size, and the compilation time grew accordingly. Building separate modules on a developer's computer consumed more time than planned. On the other hand, it was a rare situation when all the developers were compiling something at the same time. It turned out that all the available computers could be used to build a single module. This functionality is provided by the distcc package (http://distcc.samba.org), which enables distributed compilation on many computers on a per-file basis. Using this package saved a lot of time on recompilations. Switching from gcc to distcc required changing only a couple of lines in the common rules file.
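The change in the common rules file might look like this; the host names are invented, and the list of participating machines is given to distcc via its DISTCC_HOSTS environment variable:

```make
# Common rules file: switch from plain compilers to distributed compilation
CC  = distcc gcc
CXX = distcc g++

# The participating machines are listed in the environment, e.g.:
#   export DISTCC_HOSTS="localhost dev1 dev2"
```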
Setting Up a Build on the Target System
One more shell script, run by the crond daemon, installed a new build on the target system. The cron daemon on the server ran the script early in the morning, when the build process was sure to have completed. The script's task was to check that the build had completed successfully, then to check the network connection to the target system, and then to compare the versions of the new build and the one installed on the target. If everything was all right, the script copied the libraries, configuration files and executable files to the target system.
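The checks described above can be sketched as follows. The host address, the version numbers and the success flag file are invented, and the actual file transfer (done with ncftp in the project) is only indicated in comments:

```shell
#!/bin/sh
# Installation script sketch: verify the build, the connection and the
# versions before copying anything to the target system.
TARGET_HOST="192.168.0.50"        # invented target address
BUILD_OK_FLAG=$(mktemp)           # the build script would touch this on success
NEW_VERSION="1.0.37"              # invented version of the fresh build
INSTALLED_VERSION="1.0.36"        # invented version running on the target

# 1. The nightly build must have completed successfully
test -f "$BUILD_OK_FLAG" || { echo "no successful build"; exit 1; }

# 2. The target system must be reachable; in the real script something like:
#      ping -c 1 "$TARGET_HOST" >/dev/null || exit 1

# 3. Copy only if the new build differs from what the target already runs
if [ "$NEW_VERSION" != "$INSTALLED_VERSION" ]; then
    echo "installing $NEW_VERSION over $INSTALLED_VERSION"
    # In the real script the files were then pushed to the target with ncftp
else
    echo "target is up to date"
fi
```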
The proftpd ftp server was installed on the target system to support the automated copying process. The ncftp ftp client (http://www.ncftp.com) was used on the server side; ncftp simplified the automation of the copying process. The standard ftp client together with the expect utility could also be used to accomplish this task.
The copying script could also be used to install a new version of the software on the target system at any time; it only had to be run manually.
The script was run once a day. For many projects this may be too often, since a test cycle may take more time to complete. For the described project the frequency was acceptable until a certain stage, when the project became relatively stable.
The initial Linux distribution was specially prepared for installation on the target system. A necessary minimum of software was selected, the system startup scripts were modified, configuration files were changed, the IP protocol stack settings were tuned, etc. The target system kernel was compiled from source code (http://www.kernel.org) with the target system hardware in mind and with unused functionality excluded. These actions minimized the size of the software and maximized the boot-up and system speed.
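The kernel build followed the usual steps of that era; the paths below are generic, and the actual configuration choices made for the VIA board are not reproduced here:

```shell
# Building a trimmed-down kernel for the target system (generic sketch)
cd /usr/src/linux
make menuconfig        # deselect unused drivers and subsystems
make bzImage modules   # build the kernel image and the modules
make modules_install   # install the modules
cp arch/i386/boot/bzImage /boot/vmlinuz-target
```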
The Compact Flash binary image for the target system was prepared on one of the developers' computers with purpose-written shell scripts. The prepared image was then written to the Compact Flash card.
Linux kernel updates were issued a few times during the project. Each time, the kernel was recompiled and an updated binary flash image was prepared.
It is worth mentioning some less important things which also helped the work on the project.
- The ntp utility was used to synchronize time on the project computers. It was run by the crond daemon every night and corrected the computers' clocks.
- Third party documentation, such as the programming language reference and the standards of the protocols used, was also available via the web interface.
- At some stage many application area terms came up and were widely used in the source code. In order to make identifiers uniform and to have a short description of each term, a dictionary with web access was introduced, with search, add and delete facilities. A slightly modified glossar project was used to support the dictionary.
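Several of the periodic jobs mentioned in this article were driven by the crond daemon. An /etc/crontab fragment for such a setup might look like this; all the times, script paths and the ntp server name are invented, and ntpdate is assumed as the one-shot time synchronization client:

```
# /etc/crontab fragment (illustrative schedule)
# min hour dom mon dow user  command
45    0    *   *   *   root  ntpdate pool.ntp.org
30    1    *   *   *   root  /usr/local/bin/run_backup.sh
00    2    *   *   *   root  /usr/local/bin/nightly_build.sh
30    5    *   *   *   root  /usr/local/bin/install_on_target.sh
```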
Complete Scheme of Working on the Project
The structural scheme of the interaction of the separate components which supported work on the project is shown below.
Figure 2. Complete scheme of working on the project.
The described scheme was not built in a moment; it grew in parallel with the project development. In spite of that, there were no considerable problems caused by changes in the scheme of working. The experience of using Open Source was very successful.
It is necessary to say that the selected projects are not the only alternatives in their areas. In every area it is possible to find suitable options, and in each case the choice depends on the advantages and disadvantages of the competitors, personal preferences, previously acquired experience, etc.
A question may come up about the Open Source developer tools and libraries that were used in the project. If the article's feedback demonstrates interest in a possible continuation, the author will be happy to share that experience.
The list below contains links to and short descriptions of the Open Source projects used in the project. Needless to say, some of them are included in popular Linux distributions, which helps to avoid downloading packages from the Internet if Linux is used as the operating system.
References and Other Links
- Free Software Foundation. http://www.gnu.org
- Open Source home site. http://www.opensource.org
- Directory of Open Source software which is distributed under various licenses. http://www.freshmeat.net
- C++ coding standard which is being used by the author. http://satsky.spb.ru
Verbatim copying and distribution of this entire article is permitted in any medium,
provided this notice is preserved.