20 Jul 2019

SoPra Review

The SoPra (“Software Projekt”) is a module at Ulm University in which groups of students develop a software project over the course of a year. The module is mandatory for all students in computer science, software engineering, or similar software-development-oriented programs. This includes “Informationssystemtechnik”, the program I am doing. I did the project last year with a group of five fellow students and took the chance to try some new strategies, both for the software development itself and for the project management. In this blog post I will review the different aspects and summarize my experiences. If you are interested in the actual code, it is available on GitHub: github.com/SoPra-Team-10.

Our Project

Every year the task is to write an online multiplayer game. These games consist of a server for managing the game and a client for playing it. Additionally, an AI for the game and an editor for parts of the game (for example the players) need to be developed. Each team needs to develop the client and the AI, plus either the server or the editor. The remaining component can be “bought” from another team, but then needs to be maintained and extended.

For our year the game was based on the idea of Quidditch. The game is turn-based, so no real-time constraints had to be met.

Deciding on a language

The initial problem for our team was to decide on a language and technology stack for the application. The constraints given by the specification were:

  • The server and the client communicate with WebSockets
  • The client provides a graphical user interface
  • The server and the AI can run in a Docker container

The obvious choice, which was also the one chosen by most teams, is Java, as this is the language we learned during our introductory programming courses. As we are not happy with Java (in short: implicit reference types, no operator overloading, a weak template system, and overly complex build systems), we quickly decided not to use it. Initially we planned to use only one language, as this would allow us to reuse most of our code across all components. Proposed alternatives were:

  • Kotlin: Java but better
  • C#: Java but from Microsoft
  • JavaScript: not Java, even though the name suggests it
  • Python: not a new language, but certainly on the rise
  • C++: a mature do-it-all language

Kotlin was ruled out due to the lack of a usable GUI framework, as Kotlin is quite young. C# was ruled out because we planned to develop all components cross-platform, and everything but Windows is still not fully supported, especially for GUIs. JavaScript was ruled out because we were not interested in descending into the weakly typed dependency hell that is JavaScript. Python was ruled out primarily because many people on our team had zero experience with it; additionally, we feared poor performance, especially for our AI. The last option without any major drawbacks was C++, so that is the language we decided on.

Of the 35 teams that participated in our year, 28 used Java, five used C#, and one used the Unreal Engine, which required them to write a little bit of C++. We were the only team that wrote more than a few lines of C++ or JavaScript.

To mitigate the problems C++ is famous for, such as memory leaks, segmentation faults, and undefined behaviour, we decided to rely on the latest standard: C++17. This enabled us to use the tools provided by modern C++, such as shared pointers. To directly enforce good code we enabled all compiler warnings and treated them as errors, so that code with warnings does not compile. Additionally, we relied heavily on AddressSanitizer to detect memory issues. As a build system we used CMake.
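To give an idea of what this looks like in practice, here is a minimal sketch (the exact warning set we enabled may have differed, and the types below are invented for illustration):

```cpp
// Hypothetical example of modern C++17 replacing raw owning pointers.
// Compiled with roughly the flags described above:
//   g++ -std=c++17 -Wall -Wextra -Werror -fsanitize=address example.cpp
#include <iostream>
#include <memory>
#include <optional>
#include <string>

struct Player {
    std::string name;
};

// Shared ownership instead of raw new/delete: the Player is freed
// automatically once the last shared_ptr to it goes out of scope.
std::shared_ptr<Player> makePlayer(const std::string &name) {
    return std::make_shared<Player>(Player{name});
}

// std::optional makes "no result" explicit instead of returning nullptr.
std::optional<int> parseScore(const std::string &input) {
    try {
        return std::stoi(input);
    } catch (const std::exception &) {
        return std::nullopt;
    }
}

int main() {
    auto player = makePlayer("Harry");
    if (auto score = parseScore("150")) {
        std::cout << player->name << " scored " << *score << '\n';
    }
}
```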

Initially we planned on using SFML as a GUI framework. However, the member of our team primarily responsible for frontend design was unhappy with the options for styling the UI, so he decided to write our client in JavaScript using VueJS as the framework. In the end, four people worked on the C++ codebase used for all components but the client, and two people implemented the client in JavaScript.

Our Git Workflow

It was mandatory to manage our code base using Git, something everybody in the team agreed was the right choice. As we only had one repository for the complete project on the university's GitLab instance, we decided to do our main development on GitHub. This allowed us to use as many repositories as necessary and furthermore provided a better issue tracker and a better system for merge/pull requests.

We decided on using Git Flow for managing our branches: the master branch is the stable branch, more recent changes live on the develop branch, and the actual development is done on feature branches. Even though some members of our team were not very experienced with Git, we had very few issues and close to zero merge conflicts. Additionally, this concept allowed us to review all changes before merging them, so the more experienced members of our team could give advice to the others.

Code Review

We enforced code reviews for every pull request. Using the GitHub interface it is possible to specify how many approving reviews a pull request requires; we set this threshold to two. The C++ subteam consisted of four people, and the person who opened the pull request is not eligible to review it, so two of the three other developers had to review the code. This guaranteed that all pull requests were merged sufficiently quickly while still receiving sufficient review. The GitHub web interface is very good for reviewing pull requests: reviewers can be assigned, they can review the changes and annotate the code directly, and the pull request can only be merged once all comments are resolved. Additionally, it is possible to directly propose changes, which can be accepted by the original author. Overall, the GitHub UI is miles ahead of the GitLab UI, even when considering GitLab Enterprise.

One problem with code reviews that we didn't find a satisfactory solution for is how to guarantee the quality of the reviews themselves. Sadly, it is easy to approve code without reading it, and we didn't find a mechanism to prevent this.

Continuous Integration

The second pillar of our quality assurance strategy for pull requests is continuous integration. The CI runs on every pull request and first tries to compile the code. If the code compiles, the unit tests are run; the suite is executed ten times in random order to guarantee that there is no crosstalk between the tests. Compilation and tests run in a Docker container that is built from scratch every time, which guarantees a clean working environment without any old artifacts. In addition to this dynamic analysis, SonarQube is used for static analysis; it is able to detect bugs and stylistic issues in the code.
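The repeated, shuffled runs need no custom tooling. Assuming a GoogleTest-based suite (a hedge on my part, the framework itself is not the point here), the test runner supports both directly, either via the command-line flags --gtest_shuffle and --gtest_repeat=10 or in code:

```cpp
// Hypothetical sketch of a test runner that shuffles and repeats the
// suite, assuming GoogleTest; the CI could equally pass the
// corresponding command-line flags instead.
#include <gtest/gtest.h>

int main(int argc, char **argv) {
    testing::InitGoogleTest(&argc, argv);
    testing::GTEST_FLAG(shuffle) = true; // randomize test order
    testing::GTEST_FLAG(repeat) = 10;    // run the whole suite ten times
    return RUN_ALL_TESTS();
}
```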

If one step of the CI failed, GitHub did not allow us to merge the pull request.

Additionally, the CI built a Docker image and pushed it to Docker Hub to simplify deployment. Furthermore, the documentation (see below) was rendered into a webpage and deployed as well. For our client, the application was transpiled and deployed to a web server.

Unit Tests

A lot of our development was done in a bottom-up fashion. This made it difficult to test the complete system, so we relied heavily on unit tests; overall our code was covered by close to 400 of them. Without unit tests we wouldn't have been able to get the code running that quickly. The only issue was that some members of our team were hesitant to write unit tests; even after discovering bugs in their code, they didn't add tests covering the issue.
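As a hypothetical illustration of the kind of small, bottom-up unit such tests cover (Position and distance are invented for this sketch and not taken from our codebase):

```cpp
// Hypothetical unit test; Position and distance() are illustrative
// stand-ins for the small game-logic building blocks we tested.
#include <gtest/gtest.h>
#include <algorithm>
#include <cstdlib>

struct Position {
    int x, y;
};

// Chebyshev distance: number of moves on a grid that allows diagonals.
int distance(Position a, Position b) {
    return std::max(std::abs(a.x - b.x), std::abs(a.y - b.y));
}

TEST(GameLogic, DistanceIsSymmetric) {
    Position a{0, 0}, b{3, 1};
    EXPECT_EQ(distance(a, b), distance(b, a));
}

TEST(GameLogic, DistanceOfIdenticalPositionsIsZero) {
    Position p{5, 5};
    EXPECT_EQ(distance(p, p), 0);
}
```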

Documentation

We were required to write documentation for our code; for this we chose Doxygen, the de facto standard tool for C++. This was by far the most discussed mandatory aspect of the development. For such a small, time-limited project developed by a team with good communication, the documentation was seldom read, as it was easier to ask the responsible person directly. On the other hand, considering the educational aspect, requiring proper documentation was definitely the right choice.
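For readers who haven't used Doxygen: the documentation lives in structured comments right next to the code, from which Doxygen renders a webpage (as mentioned in the CI section). A made-up example of the style:

```cpp
// Hypothetical example of Doxygen-style documentation; the function
// itself is invented and not taken from our codebase.

/**
 * @brief Computes the score awarded for catching the snitch.
 *
 * @param roundNumber The round in which the snitch was caught.
 * @return The number of points added to the catching team's score.
 * @throws std::invalid_argument If roundNumber is zero.
 */
int snitchCatchScore(unsigned roundNumber);
```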

Reusing Components

To reuse as much code as possible across the different applications, we aimed to make our code as modular as possible. For this we extracted most of the functionality into four shared libraries: messages (the network messages), network (WebSockets), gamelogic (the actual game), and util (logging, timer, …). These components were used by both the server and the AI.
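To sketch how such a split looks from a consumer's perspective (all headers, namespaces, and classes below are invented for illustration and do not match our actual API):

```cpp
// Hypothetical sketch of a component composing the four shared
// libraries; every name here is made up for illustration.
#include <util/Logger.hpp>     // util: logging, timer, ...
#include <network/Server.hpp>  // network: WebSocket handling
#include <messages/Parser.hpp> // messages: (de)serialization
#include <gamelogic/Game.hpp>  // gamelogic: the actual rules

int main() {
    util::Logger log{"server"};
    gamelogic::Game game;
    network::Server server{8080};

    // The server only forwards parsed messages to the game logic.
    server.onMessage([&](const std::string &raw) {
        auto message = messages::parse(raw);
        game.handle(message);
        log.info("Handled a message");
    });

    server.run();
}
```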

The shared libraries were the aspect that caused the most issues by far. As the library versions were not pinned, people would often change something in a library and then use this change in one of the components before the updated library was merged into the master branch. Additionally, we had some problems with ABI compatibility and outdated libraries, which resulted in a lot of rebuilds and reinstalls.

In retrospect, the shared libraries are the only design decision I am not happy with. On the other hand, I am not sure about proper alternatives: a lot of projects rely on Git submodules for dependency management, which solves both issues but makes the usage of Git much more complex.

Conclusion

Overall we managed to develop a good set of applications, something we even received particular praise for from the lecturer. Some aspects, like code reviews and unit tests, were not optimal, but I am certain that these problems disappear with more experienced team members. I am not happy with our solution for reusing components; for the next project that requires a modular approach I will try Git submodules for dependency management.

From the educational perspective, this was the first project with multiple contributors for some of our team members, and I believe they had the opportunity to learn a lot about proper software development. Even for the more experienced team members, the project provided lots of room for experiments, as seen above.