DevOps – A jack of all trades?

A jack of all trades is a master of none, but oftentimes better than a master of one

The complete version of the popular saying

Being a DevOps engineer has become a popular topic nowadays. Companies are looking for DevOps engineers instead of hiring separate people for system administration, network management, database administration, etc. They look for a jack of all trades. But what, in reality, is DevOps?

DevOps is the fusion of agile practices and operations practices in the software development world.

DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support.

Ernest Mueller – The Agile Admin

From these ideas, we can understand that DevOps is not a person responsible for knowing everything about the systems, but a practice that can be performed by as many people as the project requires. Depending on the size of the project, the people in charge of operations can range from a single person to a large team. So, yes, a DevOps engineer usually is a jack of all trades, master of none (in reality, they tend to have a deeper understanding of certain areas), but oftentimes better than a master of one.

Learning DevOps

Learning DevOps is an extensive task. You usually need to learn a bit of everything and, depending on your job and responsibilities, you specialize in something.

If you look at the diagram above, you may recognize some commonly used tools like Office or Slack, or others like Git, Jenkins, and Jira that I mentioned in my post about V&V. Following this diagram, we are going to learn a little about the build, continuous integration, and deploy stages!

Configuring a virtual machine!

When learning DevOps, the conventional way is usually to install your own virtual machine and configure everything manually, but let's speed things up a little. Nowadays, getting a remote virtual machine is really fast using services like AWS or Google Cloud.

For this activity, I decided to use DigitalOcean because it is a platform I had heard of before but never used, and they offered me $100 in credit, so this activity would be free for me to do! You can use my referral code to get $100, too (I get $25 for each person who uses it).

My dashboard; they gave me the $100 credit! 😁

In DigitalOcean the virtual machines are called Droplets:

Just selecting the cheapest options for this activity, but you can go nuts with the $100 :).

And my Droplet was created; it took just 5 minutes!

I access my virtual machine using SSH. I registered an SSH key so I don't need to log in using a password:
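As a reference, connecting looks roughly like this (the IP address and key path are placeholders, not my real ones):

ssh -i ~/.ssh/id_rsa root@<droplet-ip>   # the key was added to the Droplet when creating it, so no password prompt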

Repository and production environment

For this, I decided to use Node.js because it is one of the platforms I'm most comfortable with, so I installed it on my Ubuntu server. I also created a repository for the application I'm going to be running on the server.
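For reference, installing Node.js on Ubuntu looks roughly like this; I'm sketching the NodeSource LTS setup here, and the exact version or script you pick may differ:

curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -   # add the NodeSource apt repository
sudo apt-get install -y nodejs                                       # installs node and npm
node -v && npm -v                                                    # sanity check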

One of the reasons I had heard a lot about DigitalOcean is that they have really well-made tutorials about web development, and this activity wasn't the exception: they have a tutorial on deploying a Node.js app. I'll follow it and just comment on the most important parts.

I cloned the repository onto the server and ran the project:
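Roughly, the steps look like this (the repository URL and entry file are placeholders for my actual project):

git clone https://github.com/<your-user>/devops_activity.git
cd devops_activity
npm install          # install dependencies
node app.js          # start the app; it listens on port 3000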

If I curl the URL, we get:

It's important to understand that it won't be possible to access the app remotely, because port 3000 is not configured to be publicly accessible.
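To illustrate, this is roughly what happens (the Droplet IP is a placeholder):

curl http://localhost:3000        # from the server itself: the app answers
curl http://<droplet-ip>:3000     # from my laptop: no response, port 3000 is not exposed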

PM2 is a tool built with Node.js that helps us manage our production application. PM2 is a production process manager: it keeps the app running and restarts it if it crashes or is killed.

We run PM2 to start and watch our application; this way it restarts the app whenever it detects a change:
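A minimal sketch of the commands, assuming PM2 is installed globally through npm and the entry file is app.js:

sudo npm install -g pm2                          # install the process manager
pm2 start app.js --name devops_activity --watch  # start the app and watch the files for changes
pm2 save                                         # remember the current process list
pm2 startup                                      # prints a command that registers PM2 as a service on boot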

Finally, we need a reverse proxy to be able to access the content externally. For this, we use Nginx (DigitalOcean also has a tutorial on how to configure Nginx):
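The relevant part is a server block that proxies to the app. A rough sketch, assuming a site file name of my choosing and the proxy headers from the DigitalOcean tutorial:

sudo tee /etc/nginx/sites-available/devops_activity <<'EOF'
server {
    listen 80;
    server_name _;                        # or your domain
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/devops_activity /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx     # test the config and reload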

This redirects requests from port 80 to localhost:3000.

And now we are able to visit the site externally:

Continuous Integration

For our continuous integration, we need to set up an automatic way to update and build our app on the server. The simplest way to do this is with a cron job that pulls the repository every N minutes.

For this app, I decided to update every 5 minutes, because I figured that is roughly how long it takes me to make changes and then check whether everything works correctly.

*/5 * * * * cd /root/devops_activity && git pull origin main
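To install the job, I edit the crontab; the five schedule fields are minute, hour, day of month, month, and day of week:

crontab -e                  # opens the current user's crontab in an editor
# "*/5 * * * *"  ->  every 5 minutes, any hour, any day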

In reality, this "N time" depends entirely on the needs of your app. If it's a non-critical app that is continuously updated and needs to have the most recent version, maybe you want to update it every minute. If your app is critical and you need to make sure it stays available while users are on it, maybe the best option is to update it at 5 am every day.
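For example, the same job with the two schedules mentioned above would look like this:

* * * * * cd /root/devops_activity && git pull origin main    # every minute, for a non-critical, always-fresh app
0 5 * * * cd /root/devops_activity && git pull origin main    # once a day at 5 am, for something more delicate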

If there are changes to the repository, those changes will be automatically pulled by our server every 5 minutes. Because PM2 is watching the files, it will automatically restart the app!

It's important to know that you can check cron logs using `grep CRON /var/log/syslog`; that way, if the job is not working, you can see whether it's throwing an error.

When working with cron, it's important to understand that not only can errors happen (which you can see in the logs), but if you're running a script, weird things may happen (like the script freezing or entering an infinite loop), and you may end up with duplicate jobs running. Two ways to avoid this are locking a file or creating a PID file. You can check more on this here!
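As a minimal sketch of the file-lock approach, `flock` can wrap the job so a second copy refuses to start while the previous run still holds the lock (the lock file path is arbitrary):

*/5 * * * * flock -n /tmp/devops_activity.lock -c 'cd /root/devops_activity && git pull origin main'
# -n: don't wait for the lock; if the previous run is still going, this run simply exits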

Testing

One way to gain confidence that things work as intended is through tests. In this case, I added Jest to the project and created a simple sum function with a test for it, so I had something to test.

Once I had it, I could run Jest, and if all tests pass, the process exits with code zero; if something fails, it exits with a non-zero code. This makes it possible to define actions depending on the result.
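This is the usual shell pattern: the exit code can be inspected directly or chained with `&&` / `||`:

npm test
echo $?                                        # 0 if every test passed, non-zero otherwise
npm test && echo "all good" || echo "broken"   # run different actions depending on the result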

Project status

Tons of projects on GitHub use badges to indicate their status: whether the service is up, the test coverage, etc. Using shields.io, I created two badges to indicate whether the tests passed or failed and uploaded them to my repo.
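Shields.io can generate static badges from a URL, so grabbing the two SVGs is roughly this (the labels and colors are just my choices):

curl -L -o badges/success.svg "https://img.shields.io/badge/tests-passing-brightgreen"
curl -L -o badges/failed.svg  "https://img.shields.io/badge/tests-failing-red"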

I modified my cron job to set the badge depending on whether the tests passed or failed. The correct thing here would be to create a script, but I was too lazy, so I just concatenated commands using the "&&" and "||" shortcuts.

cd /root/devops_activity && git pull origin main; npm install; rm badges/status.svg; npm run test && cp badges/success.svg badges/status.svg || cp badges/failed.svg badges/status.svg; git commit -am "status" && git push origin main

In the command, you can see that first I enter the folder and pull the changes, then install any missing libraries and remove the current badge. Then I run the tests: if everything goes well, it copies the success badge; if the tests fail, it copies the fail badge. Finally, it commits and pushes any changes (if the badge changed).
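For reference, the "correct" version as a script would look something like this (the script name and log path are hypothetical):

#!/bin/bash
# /root/deploy_and_test.sh — same steps as the one-liner, just readable
cd /root/devops_activity || exit 1
git pull origin main
npm install
rm -f badges/status.svg
if npm run test; then
    cp badges/success.svg badges/status.svg
else
    cp badges/failed.svg badges/status.svg
fi
git commit -am "status" && git push origin main

And then the crontab just calls the script:

*/5 * * * * /root/deploy_and_test.sh >> /root/deploy.log 2>&1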

If the tests pass:

If the tests fail, the badge changes:

And now it's done: there is a simple environment that can keep growing with the project, and there is automation for deploying and testing!

Conclusions

Doing this exercise was interesting, but there wasn't much new to me. I learned how to deploy a Node.js app for production, something I had never done before, and used DigitalOcean as a new tool, but it wasn't that different from what I had used before in AWS.

Usually, a DevOps engineer is expected to know how these SaaS and PaaS platforms work because everything is moving to the cloud. Because of this, the platforms have made it really easy to deploy new stuff and create CI/CD pipelines. For example, Heroku (a tool I use a lot) has a way to link your repository automatically: it watches it for changes and runs the build commands. In fact, DigitalOcean has a similar service:

The disadvantage of using these tools is that you are limited to the configuration options the platform gives you. Maybe the platform is extensible and you can add what you need, but then you would be required to learn a new system.

Beyond the conveniences these platforms provide, it's important to understand that a DevOps job never ends. If a platform makes one thing easier, there is always more to focus your attention on. I would consider a DevOps engineer an essential part of any team.
