A Week With Golang: Porting sto-player To Go

Introduction

So, for some reason I got really interested in golang of late and decided to dig into the language a bit. I think what started me off on it was noticing that my sto-player was occasionally a bit slow. Usually most operations were close to instantaneous, but some (especially starting/stopping the film) would on occasion take a couple of seconds to process. I wanted to see if a compiled language would give me some added "oomph". Having known about go for a while, it probably just stood out to me as a choice, especially since it's one of the "in" languages nowadays. I started off with the go tour and, to be honest, some parts were a bit beyond me, so I decided to just start hacking away at replicating sto-player in go.

Porting sto-player

The actual process of porting it was surprisingly easy (but with a few hiccups). Before I go on to detail what I did and how I did it, I guess I’ll start out with my thought process and the steps I took to implement the program.

With any biggish project, I start off by breaking it up into bite-sized chunks to work on, and this was no different. The pieces that go-player comprised, as far as I was concerned, were:

  • A web front end - something that displays several different pages and responds differently depending on what page you are on and what actions you are performing (POST vs GET). There's a small sketch of this right after the list.
  • A file searcher - I want to be able to list all my movies under a certain directory and display them on a web page, without having to dig through everything myself to find the actual files I'm after. I just want to give it a directory and have it separate the video files from everything else (see the second sketch below).
  • The templating code - taking the movies I find and then presenting them on the front end.
  • A controller for omxplayer - I want to be able to control the omxplayer from my website so I need to be able to start it up, send commands to it and close it off when I am done.
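
To make the front end bullet a bit more concrete, here's a minimal net/http sketch of the kind of thing I mean - one handler behaving differently for GET and POST. The route and handler names are made up for illustration; this isn't lifted from go-player:

package main

import (
	"fmt"
	"log"
	"net/http"
)

// movies responds differently depending on the HTTP method,
// like the pages described above.
func movies(w http.ResponseWriter, r *http.Request) {
	switch r.Method {
	case http.MethodPost:
		// e.g. the browser POSTs an action such as "play" or "stop"
		fmt.Fprintf(w, "received action: %s\n", r.FormValue("action"))
	default: // a plain GET just renders the page
		fmt.Fprintln(w, "the movie list page would render here")
	}
}

func main() {
	http.HandleFunc("/movies", movies)
	log.Fatal(http.ListenAndServe(":8080", nil))
}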
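
And for the file searcher, the standard library's filepath.Walk does most of the heavy lifting. Again a rough sketch - the extension list here is my own guess, not necessarily what go-player actually checks for:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// findMovies walks dir and keeps only the files with a video extension,
// separating the video files from everything else.
func findMovies(dir string) ([]string, error) {
	exts := map[string]bool{".mkv": true, ".mp4": true, ".avi": true}
	var movies []string
	err := filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if !info.IsDir() && exts[strings.ToLower(filepath.Ext(path))] {
			movies = append(movies, path)
		}
		return nil
	})
	return movies, err
}

func main() {
	movies, err := findMovies("/media/movies")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	for _, movie := range movies {
		fmt.Println(movie)
	}
}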

Though I've hashed it out here in some detail, in the moment it was very much intuition - I knew these were the areas I needed to work on, and that this was probably the best order to tackle them in, since each one depended on the ones before it. Further, I knew some parts would probably be a bit difficult, and I didn't want to get bogged down figuring them out when I could be making progress elsewhere.

None of those sections were really hard to do, but I did run into some snags which I’ll detail below.

Bumps along the way

Some of these bumps were no doubt down to the lack of knowledge I had/have about the language, and others are just limitations of its implementation. That said, here are the snags I encountered:

  • I was used to being able to pass multiple objects into a template in web.py and then use regular python code to configure the templates to my liking. In go, insofar as I can tell, you can only pass a single object into a template; generally a struct (see the first sketch after this list). I had to reconfigure things a bit to get them the way I like, and I still have some things to work on yet. For example, I generated the table and its columns in sto-player in the template itself, but in go I will either need to create a template function or do it in the main code.
  • Speaking of which, I spent far more time than I'd care to admit wondering why I couldn't access my struct in my template, despite "knowing" I was doing everything right, until I finally discovered that access levels are based on case: only capitalised (exported) fields are visible outside their package, templates included. This is one reason at least to finish the go tutorial, so you don't run into these nasty surprises :)
  • The controller for omxplayer was actually far easier than I was anticipating. The only hang-up was the code to kill it, which I ended up finding on stackoverflow - but ultimately I don't need it, because now that I know how to send commands to the player, I can just send it the quit command ("q"). See the second sketch below.
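
To illustrate the first two bullets, here's a minimal sketch of passing a single struct to a template, with the capitalisation snag called out. The names are illustrative rather than go-player's actual code:

package main

import (
	"html/template"
	"os"
)

// The fields must be capitalised (exported) or the template simply
// can't see them - the snag that cost me so much time.
type Page struct {
	Title  string
	Movies []string
}

func main() {
	tmpl := template.Must(template.New("page").Parse(
		"<h1>{{.Title}}</h1>{{range .Movies}}<p>{{.}}</p>{{end}}"))
	// Execute takes one data value - a single struct here,
	// rather than web.py's multiple template arguments.
	page := Page{Title: "My Movies", Movies: []string{"Alien", "Blade Runner"}}
	if err := tmpl.Execute(os.Stdout, page); err != nil {
		panic(err)
	}
}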
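
And here's the omxplayer side of things. The gist is to start the process with a stdin pipe and write single-character commands down it, which is why a separate kill routine became unnecessary. This is a hedged sketch rather than my exact code:

package main

import (
	"io"
	"os/exec"
)

// startFilm launches omxplayer and hands back a pipe we can
// write commands to later ("q" quits, for example).
func startFilm(path string) (*exec.Cmd, io.WriteCloser, error) {
	cmd := exec.Command("omxplayer", path)
	stdin, err := cmd.StdinPipe()
	if err != nil {
		return nil, nil, err
	}
	if err := cmd.Start(); err != nil {
		return nil, nil, err
	}
	return cmd, stdin, nil
}

func main() {
	cmd, stdin, err := startFilm("/media/movies/alien.mkv")
	if err != nil {
		panic(err)
	}
	// ...later, instead of killing the process:
	stdin.Write([]byte("q")) // politely ask omxplayer to quit
	cmd.Wait()
}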

As you can see, nothing dramatic, but some of these took a while to figure out. Keep in mind that though I spent a little under a week on this, it was not full time and not even every day - just a couple of hours here and there, and probably most of that was spent working out bugs and/or the actual way to implement the above code.

Performance comparison

go-player actually runs pretty fast, and for me at least is generally instantaneous. It starts and stops films far quicker than sto-player (which is where it was generally slowest), and it has a zippy feel to it. I haven't run any performance tests to really hash it out, but it's definitely noticeably faster.

My thoughts on Golang

The bad

So, after this is all said and done, what are my thoughts? I came across someone the other day who described go as a very opinionated language, and I think this is rather true. The compiler will refuse to build if you have unused variables or unused imports, or if you use the "wrong" bracing style (because the compiler automatically inserts semicolons, putting the opening brace on a line of its own after an if statement, for example, is a compile error). It depends on capitalisation to set access levels, so if you have a naming convention that goes against that, you're going to have to change it. The inconsistency with variable declarations is also annoying: := only works inside functions, so at package level you have to fall back to var, and I kept forgetting I had to do it that way because I wasn't in a function. I would rather have one consistent system that is verbose than one based on context.

Another small thing that was annoying is that it's OO-lite. It has structs onto which you can bolt methods, but it's not really the same and feels rather hackish to me, though I'm sure plenty of others will disagree with my assessment. I also didn't realise how much I'd miss method overloading till I realised I couldn't use it :)

The good

I saw a chart recently that I think was pretty spot on in describing go: in terms of a mixture of developer-friendliness and performance-friendliness, go has a really good balance of both. It's designed to be fast and to make development easy in certain use cases, and in those cases it excels. If you like languages like Ruby or Python you will feel far more at home in go than you would had you come from Java or C#, and you'd get far better performance too (terms and conditions apply).

Using the language, while different, nevertheless felt familiar, and I was able to be productive quickly. I feel this is its greatest strength: it makes a lower-level language feel like a higher-level one, and you don't feel as out of your depth as you would have had you decided to get into C, for example. Ultimately, I like the language. It has some quirks which I find annoying, but I would like to play around with it more and create more things with it. In terms of my language repertoire, it's definitely something I want on my "strong skills" list; I think it would be a nice complement to Java and Python. One day I might add C to the mix too :D

Anyways, that’s it for now guys. Catch you round ;)

New Gist: Nicer Looking Titles In Hugo

I wrote this gist the other day to produce clean titles in hugo. There might actually be a way to do this within hugo; if there is, feel free to drop me a line (my searching didn't really produce any results). In any case, what I wanted was to create a post in hugo with a filename that is all lowercase, has its spaces replaced with dashes, and contains no special characters (all of which are annoyances in *nix systems). When you run the script it comes up with a prompt asking for the title.

For this post this is what it looked like at the prompt:

title> new gist: nicer looking titles in hugo

Hitting enter then opens up a new file named new-gist-nicer-looking-titles-in-hugo.md, and the title produced, as you can see, becomes New Gist: Nicer Looking Titles In Hugo. I got sick of editing posts to do this by hand, so this simple script makes things a tiny bit easier for me. Hope you guys enjoy it too :)
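
The gist itself is a small script, but the transformation is simple enough to sketch out. Here's the idea expressed in go (since I've been in a go mood lately); the function name is mine and the real gist may differ in the details:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// slugify lowercases a title, swaps spaces for dashes and
// strips anything that isn't a letter, digit or dash.
func slugify(title string) string {
	s := strings.ToLower(title)
	s = strings.Replace(s, " ", "-", -1)
	return regexp.MustCompile(`[^a-z0-9-]`).ReplaceAllString(s, "")
}

func main() {
	// the same title I typed at the prompt above
	fmt.Println(slugify("new gist: nicer looking titles in hugo") + ".md")
	// prints: new-gist-nicer-looking-titles-in-hugo.md
}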

Refactoring Old Code

Recently I decided to refactor some old code, which you can find here. There are some more changes I want to make eventually, but the most immediate ones (and thus the ones I made) were to the movie and GeneratePlayer classes, which handle the logic for the omxplayer wrapper.

Before I explain the changes, let me show you the before and after. The GeneratePlayer class was renamed and put into a separate file for readability here.

What you'll find is that the functionality is pretty much identical (I will be adding some better error handling at some point); the chief difference is where that logic is located. Initially GP (GeneratePlayer) did very little other than generate the player. The movie class handled all the logic related to starting the film, stopping it and anything else related to it (pausing, fast forwarding, et cetera). What this meant was that to get this functionality I had to rely on some globals and essentially keep mental track of what I was doing. This resulted in a lot of bugs when I was initially building and extending it, because I would forget to set/unset values.

Since I first took a crack at this a couple of years ago, I've gotten better at understanding and using OO principles (such as letting objects take care of themselves), and this has resulted in much more easily maintainable code. All of the functionality stored in the movie class and the global variables has been moved into the now-renamed Player class; pretty much everything to do with the state and functionality of omxplayer is in the player object. Another change is a bit of abstraction around the various input commands: aside from killing the film, all the commands are processed by the sendCommandToFilm() method, which basically translates the input given by the browser into terminal commands omxplayer understands (sketched below).
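
The player itself is python, of course, but the shape of sendCommandToFilm() is easy to sketch in any language. In go-flavoured form it's essentially a lookup table; the action names and keys below are illustrative, not necessarily the real set:

package main

import "fmt"

// commands maps the action names the browser sends to the
// single-key commands omxplayer understands.
var commands = map[string]string{
	"pause":      "p",
	"quit":       "q",
	"volumeup":   "+",
	"volumedown": "-",
}

// sendCommandToFilm translates a browser action into the key
// that gets written to omxplayer's input.
func sendCommandToFilm(action string) (string, error) {
	key, ok := commands[action]
	if !ok {
		return "", fmt.Errorf("unknown action %q", action)
	}
	return key, nil
}

func main() {
	key, err := sendCommandToFilm("pause")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("would send:", key)
}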

This was a fun project, and I will probably return to it in a few days to make some further changes :) Oh, other things I did (which are probably less noticeable): I updated the version of bootstrap I use and changed the name of the player itself, just because it was relatively simple in the scheme of things. Anyways, that's it for now. Catch y'all 'round :)

Early days hacking on Doom

So, I've recently started hacking on doom, as you would have gathered from my other post, and I've been tinkering with it a bit since then. My overall goals are somewhat simple:

  • Port to SDL
  • Get it working on OS X
  • Get it working on higher resolutions

SDL is what most of the doom source ports seem to be using nowadays, so I guess I'll jump on board with that. It will make the code portable and thus usable on OS X, which is my next goal. The reason the code as it stands isn't directly portable is that it relies on X, and X isn't (natively) available on OS X. Once I've got it portable and running on OS X, I can then focus on adjustable resolutions (working on this prior to the other 2 steps would just result in me having to redo things).

My short term goals are far more modest:

  • In so far as possible, fix any warnings reported by gcc
  • Improve comments in the code
  • Implement a different coding style (I don’t like how one line if statements are done, for example)
  • Generate documentation, probably using Doxygen

So, I've been working on the 1st and 4th points primarily, but I have done a tiny bit on the 3rd. I've generated documentation with Doxygen, and while there isn't a lot of documentation in the source code itself, the Doxygen output is still quite useful. The call graphs especially help you see how execution reaches a given function in the first place, and looking at function after function you begin to develop a feel for what's going on. Early days yet, but it's helpful, and as I improve the comments in the source Doxygen's usefulness can only improve.

The coding style thing is just for my own sake; I can get easily confused and I like to remove ambiguity when I can. I've looked at some lines of code wondering what the hell is going on, only to realise they'd split up (or not split up) the code in a way I wouldn't expect. I'll work on this here and there. It's not really that important, and more than likely there's a tool out there that can automate some of what I want to do, so when I finish up the other side of things I may look into one rather than spending all those man hours. In the meantime, I'll fix style while I'm in an area, but that's about it for now.

Improving the comments in the code is another thing I’d like to do. There are some somewhat useful comments, but overall I have no idea what’s going on. I really like the comments in PrBoom’s source code. They’ve done a good job IMHO and I want mine to be as useful. Some other source ports could take some inspiration from them, just sayin’.

Lastly, the main thing I have been working on is fixing compilation warnings generated by GCC. A good bunch of them were variables that were set but never used; these were relatively easy to fix. Next were implicitly declared functions, which I have also fixed. The remaining warnings are of various kinds; we'll see how I go with them. I don't think they should be too difficult to fix, since they're warnings and not errors, but I could just be naive :)

Anyways, that's where things are at for now. I'll post more on my progress in another post. Catch y'all 'round :)

Compiling the Doom source code

So, I was feeling a bit nostalgic and decided to get my hack on a bit with the doom source code. I haven't done anything major with it yet; the first mission was just getting it running. Seeing that the code is over 20 years old at this point, it's understandable that it's not going to run out of the box without a bit of tweaking. Thankfully, it doesn't need a whole lot. So, if you're looking to run it, here's what you'll need to do.

Firstly, install a Linux distribution. Since this is the Linux source code, and we're just looking to get it up and running with as little effort as possible, we won't worry about porting it to Windows or OSX for now :P If you try running this on a 64bit distro, you're gonna have a bad time. I recommend using something like Ubuntu MATE 15.10 in 32bit form.

Once you have it installed, you'll need to run the command below to be able to compile Doom and edit the sources; leave out vim if you don't plan on using it.

sudo apt-get install build-essential libx11-dev libxext-dev vim

Once that's done we're going to need to do a couple of things. First things first, you'll need to create a folder called "linux" in the linuxdoom-1.10 folder; this is where the compiled files will be output. Once you've done that, you'll need to edit a couple of files.

In "i_video.c" on line 49 you'll need to correct the include filename: it should read "#include <errno.h>" (the stock source asks for errnos.h, which doesn't exist). While we're in this file, we'll also need to insert the following line at around the 820 mark: "XInstallColormap( X_display, X_cmap );". This correction I noticed on the GitHub page (and the others are posted there too, to be fair, which would have saved me some effort trying to figure the other errors out on my own lol); from what the commit says, it just forces the color map that's created to be placed on the screen.

In "i_sound.c" we need to remove line 166, where it says "extern int errno;", and then insert "#include <errno.h>" near the top where the other headers are.

In the sndserver folder we need to create a folder called "linux" as well, and then edit "linux.c" and repeat the above: find the "extern int errno;" line, remove it, and insert "#include <errno.h>" near the top.

You then need to compile the files. Go into the sndserver folder and run "make", and do the same in the "linuxdoom-1.10" folder. Copy the compiled sndserver binary into the "linuxdoom-1.10/linux" folder. Once you've done that, run "ln -sv /path/to/sndserver /usr/bin/sndserver", or alternatively modify your path to append the sndserver location. Doom calls this executable as a separate process, so it won't actually find it unless it's in a location listed in your $PATH variable.

Once we've done that, we then need to run a command to open an 8bit X session (with an xterm inside it) to run doom from:

xinit $(which xterm) -- $(which Xephyr) :1 -screen 320x200x8 -br -reset -terminate

This is a very small window, as you'll see. Don't worry, we're nearly there. This is a virtual X session run within another X session, which thankfully makes this process a lot less complicated than it would have been otherwise. If you're not already in the doom folder, get to it and then run the following command: "padsp ./linuxxdoom". padsp is a compatibility layer for older programs built against OSS, a sound system architecture no longer in common use; it will allow us to hear doom in all its glory.

Anyways, whew, that was exhausting eh? Well, at least you're now running (a buggy) Doom! Now you can start having some real fun hacking the code!

Raspberry Pi 2: Boosting Speeds With Gigabit Ethernet Adapter

Note: For this post I will be measuring speeds in megabytes per second (MB/s), because we generally measure things in bytes in our day to day affairs; to me it's more practical, and it saves you converting from megabits to appreciate the real world performance for yourself.

Note 2: Please read the addendum for an update as I found even quicker speeds than reported here.

Introduction

I've had a Pi for a while now, and earlier this year I upgraded to the Raspberry Pi 2, hoping that the network speed would improve a tiny bit thanks to the faster, multi-core processor - but unfortunately it didn't. After some research into methods of improving the speeds, I came across this article, in which the author, through the use of a USB3 to gigabit ethernet adapter, increased his theoretical network speed from 11.8MB/s to 27.8MB/s.

He used a tool called iperf, which is available on most linux distributions; you can install it on OSX as well through homebrew, and probably other methods too. Iperf's theoretical speeds are not necessarily close to real world values, because they don't take into account the bottlenecks that slow actual transfers down (in the case of the Pi, the fact that the USB and ethernet buses are shared means you won't get all of that bandwidth in actual use).

Testing

So, after reading that article I decided to test this out on my own, and I purchased a similar adapter for my Raspberry Pi 2. It's made by a company called Volans, and I've noticed it in a few computer shops here in Sydney; you can find it on their page here. It works well and I highly recommend it. It's about the same price as the adapter above, and it worked straight out of the box with both my Pi and my macbook pro (the adapter he mentions, which I also bought, required drivers for my macbook pro).

For the testing, I focused mainly on large file transfers, since most of my regular transfers are of big files such as ISOs (I play around with a lot of distributions, download them straight to my Pi, and transfer them to my macbook when I decide to try 'em out). I tried 3 different files (an arch iso, a slackware iso and a fedora iso), each one roughly double the size of the one before it. The transfers themselves were done with rsync.

Performance

I actually had to fiddle around to really boost my speeds, but that was more about how my wireless router was set up than about the Pi itself. I'll make a separate post about that process, as I was quite surprised at the performance improvements I saw before I even got to the switch to the USB3 to gigabit adapter. Anyways, how much of an improvement did I personally get from switching to the adapter on my Pi? I'll let the chart speak for itself:

"chart of speeds"

So, as you can see in the chart, my speeds increased from an average of 8.37MB/s to a whopping 12.93MB/s. This means that though my theoretical speeds more than tripled, real world performance only increased by about 54.5% (12.93 / 8.37 ≈ 1.54) - in other words, my former speeds were about 2/3rds of my current ones (8.37 / 12.93 ≈ 0.65).

You'll notice that if I go gigabit to gigabit I do get a further increase, but it's only about another 1.4MB/s, meaning most of the improvement comes from the adapter itself. While I wouldn't mind that extra 1.4MB/s, it's not really practical, as I use my laptop around the house and I'd be a bit limited if I had to rely on an ethernet cable (plus I'd need a really long cable too :P).

Conclusion

One day, when gigabit ethernet comes to the Raspberry Pi and isn't attached to the same bus as the USB, we might get to experience some really nice speeds. Until then, I can conclude that if you want to squeeze the most network performance you can out of your Raspberry Pi, you can't go wrong with a USB3 to gigabit adapter. The speed improvement is real, and in my case it has decreased transfer times by about a 3rd.

Addendum

After posting this, I tried transferring files through the GUI rather than with rsync (which I should have realised would be slower due to the encryption via ssh) and I got substantially quicker speeds. In fact, I added another 4MB/s when going wireless ac to gigabit - in other words 18.1MB/s, versus my initial 8.37MB/s. That's more than double my initial transfer speeds. An amazing improvement, and it only makes me recommend purchasing one of these adapters even more.

You Know You're an Intermediate When..

When have you moved from being a beginner to an intermediate (or better) linux user? Simply put, it mostly centers on the command line: the more comfortable you are with the terminal, the more of your Linux knowledge is actually useful and transferable across distros.

Below is, hopefully, a helpful checklist to see how far along you are in the scheme of things. If you are confident about all of these, you can safely say you're in the intermediate zone - but, as with all things, you still have plenty left to learn!

These are roughly in the chronological order in which you'd expect to learn and get familiar with these things, but if you're anything like me you often cover multiple bases at once, at varying degrees of proficiency, so don't be too surprised if you bounce around a bit in terms of what you're comfortable with below.

Now for the list:

  • You know how to move around in the command line (moving and copying files, changing folders and reading config files).

  • You are familiar with how to use at least one command line editor (vim/emacs/nano).

  • You know about and appreciate the use of su/sudo when needed.

  • You know how to update your system via the command line, and if for some reason you need to edit a file to enable a repo or select a better mirror you know how to do it.

  • You know what services are and you know how to start, stop, restart or check the status of a service running on your system.

  • You aren’t immediately turned off by the prospect of using a command line tool instead of a GUI based one.

  • You've gotten pretty good at googling your problems and fixing them - and, failing that, you know how to provide helpful information when asking for help online.

  • You’re familiar with how to partition your hard drive and how to configure a file system on it from the command line.

  • Editing config files like fstab or sshd_config doesn't fill you with panic.

  • You learn about, and start finding ways to use various command line tools (sed, grep, cat and so on).

  • You find yourself writing simple shell scripts to speed things up for yourself.

  • You're able to get an intermediate-level distro up and running without much issue (Arch, Gentoo, Crux and so on), and the only thing that would put you off running one, if anything, is the extra maintenance required over something that "just works".

  • You can compile yourself a kernel, boot into it, and still have a working system.

  • You’ve managed to install Linux From Scratch and feel pretty chuffed with yourself.

  • If a binary isn't available for your distribution, you at least give compiling it yourself a crack.

Create a Bootable USB on OSX

Introduction

There is a guide available on the ubuntu site for how to create a bootable USB on OSX, which you can find here. I decided to automate the process a bit by creating the following script. You can download it here, or provide suggestions if you'd like. It's a rather simple script, but I found that it simplifies the process greatly and saves me the paranoia of wondering if I mistyped a command.

To save you the hassle of going to the gist site however, I’ve pasted it here below. Hope you find it useful!


#!/bin/bash

# iso2usb is a simple script for OSX to ease the conversion of an iso to an img file
# and then dd that img file to a USB of your choosing. You simply give it the iso
# as a parameter and then the disk number and it will take care of the rest.

# based on the commands here: http://www.ubuntu.com/download/desktop/create-a-usb-stick-on-mac-osx
# and the color commands here:  http://stackoverflow.com/questions/5947742/how-to-change-the-output-color-of-echo-in-linux

# exits out of the script upon error
set -e

# colors

# feel free to replace the color choices
# I made below with any of the following

BLACK='\033[0;30m'     
DGRAY='\033[1;30m'
RED='\033[0;31m'     
LRED='\033[1;31m'
GREEN='\033[0;32m'     
LGREEN='\033[1;32m'
ORANGE='\033[0;33m'     
YELLOW='\033[1;33m'
BLUE='\033[0;34m'     
LBLUE='\033[1;34m'
PURPLE='\033[0;35m'     
LPURPLE='\033[1;35m'
CYAN='\033[0;36m'     
LCYAN='\033[1;36m'
LGRAY='\033[0;37m'     
WHITE='\033[1;37m'
NC='\033[0m'

# beginning of actual script

if [ -n "$1" ]; then
    printf "\nNOTE: If you do not see ${GREEN}## \
COMPLETED SUCCESSFULLY ##${NC} prior to the script \
exiting then something has gone wrong and you will \
need to investigate further.\n"
    printf "\n${RED}DO NOT ASSUME THAT THE USB IS USABLE!${NC}\n\n"
    printf "${GREEN}## FILE CONVERSION ##${NC}\n\n"
    # swap the .iso extension for .img
    img="${1%.iso}.img"
    printf "original: ${CYAN}$1${NC}\n" 
    printf "new file: ${PURPLE}$img${NC}\n"
    read -p "proceed (y/n): " ans
    if [ "$ans" == "y" ]; then  
        hdiutil convert -format UDRW -o "$img" "$1"
        mv "$img.dmg" "$img"
    else
        printf "exiting script..\n"
        exit 0
    fi
    printf "\n${GREEN}## EJECT USB ##${NC}\n\n"
    diskutil list
    echo ""
    read -p "which disk is being ejected - enter number only: " usrinp
    printf "eject ${GREEN}/dev/disk$usrinp${NC} and img (y/n): " 
    read ans
    if [ "$ans" == "y" ]; then
        diskutil unmountDisk "/dev/disk$usrinp"
        printf "\n${GREEN}## IMAGING USB ##${NC}\n\n"
        printf "If requested, enter password to begin imaging..\n"
        sudo dd if="$img" of="/dev/rdisk$usrinp" bs=1m
        printf "\n${GREEN}## COMPLETED SUCCESSFULLY ##${NC}\n\n"
    else
        printf "exiting script..\n"
        exit 0
    fi  
else
    printf "${RED}ERROR:${NC} No file name given\n"
fi 
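
For what it's worth, usage looks something like this, assuming you've saved the script as iso2usb and made it executable with chmod +x (the iso filename below is just an example):

./iso2usb ubuntu-15.10-desktop-i386.iso

The script then walks you through the conversion, asks which disk to image, and hands the rest off to dd.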

A Gentoo Review

Gentoo is a distribution I have wanted to try for a long time, and only in the last few days have I had a chance to play around with it and, more importantly, get a system up and running! :) One of the main reasons I want to do a review is that there aren't really that many around, and the ones that are around often give the impression their authors have no idea what it's actually like to use Gentoo as a system installed from scratch, set up and configured to one's liking. Further, a lot of the same things seem to be said about it, which is evidence once again of people not being intimately familiar with it, but rather repeating what they've heard in the echo chamber that is the internet.

One of the main reasons Gentoo is, I guess, hard to review is that it's not a regular desktop Linux distro. There is no "look" to it, nothing that visually defines it. It's all under the hood, and as such I understand why it, and distros like it (for example Arch), seem to get a lot less press, especially on sites like distrowatch, because there's nothing "representative" to show. It's all about how it runs at its core, which is generally less interesting for most people out there. It was inspired by the BSDs, and you can see that a lot in how it's set up and run. In terms of its target audience, it would be power users and intermediate+ Linux users.

What is Gentoo

Gentoo is a rolling release distribution[1] that installs new software through emerge (emerge can be likened to apt-get, pacman and yum in that it's used for installing, removing and updating software). Where it differs from other distributions is that it does not install binary versions of its packages; rather, it downloads the source of the software you want to install and compiles it on your system according to its ebuild. This puts it in a fairly unique position in the Linux world, but there are 2 distributions that I believe are similar, and I'll cover those in a moment.

My Setup

I ran Gentoo in a VM in VirtualBox with a fairly vanilla install, which had me installing the full KDE desktop, vim (really disappointed that nano was chosen over vim - the fanboy in me died a little bit) and a few other minor apps. The only really drastic thing I did (if you can call it that) was change my version of GCC from the default 4.8.5 to 4.9.3, because the xorg drivers for virtualbox wouldn't compile otherwise. I made this change at the beginning, before compiling any other software, in order to keep things as simple as possible.

Using Gentoo day to day was not really a problem aside from 2 things: first, the above mentioned virtualbox drivers, and second, the fact that I couldn't get audio to work until I started from scratch. This I believe was a USE flags issue, as I had initially not set my profile to a desktop one, and despite fiddling around with numerous settings and trying various things, I couldn't get it to work.

Aside from those initial birthing pains, updating the system and running it day to day has been fine. I've experienced no bugs or other signs of system instability. Overall everything worked smoothly, and that left me with plenty of time to explore the Gentoo world and begin to find out how it ticks and how to make the most of it. From start to finish, the set up took about 10hrs of compiling software plus various other administration - maybe a bit less, and definitely a lot more if I include Chrome in my time factoring. Ultimately, don't expect to compile and have a desktop system up and running in Gentoo in anything less than, I'd say, about 3-5hrs, depending on how fast your system is.

That all said, I'll begin my review by going back to the comparisons I mentioned earlier. The 2 distributions that match closest to Gentoo, in my opinion, are Arch and Linux From Scratch, and I'll briefly compare the 2 against Gentoo now.

Gentoo and Arch

Gentoo and Arch are fairly similar in that they're both "work your way up from the bottom" distros, and setting up everything is done from the terminal. Both are also rolling release, which means you are always on the bleeding edge, and both of course feature a package manager of sorts which handles dependencies and basically makes your life easier: you keep up to date and install things without having to worry about chasing down what you may or may not need.

The difference is that Gentoo is source based, not binary. In other words, when you want to install vim, you download it and any dependencies, and ebuild scripts compile the software with whatever USE flags you've set up. I'll cover these more in a moment, but the ebuild scripts are very similar in concept to the PKGBUILD scripts found in the AUR, which perform the similar function of downloading and compiling software and performing dependency checks.

Gentoo and LFS

Linux From Scratch is even more of a "meta" distribution than Gentoo. It basically teaches you how to build a minimal Linux system from scratch, hence the name. There is no official package manager, and everything you do is done manually. When you finish the LFS book, if the software you want to install is not in the BLFS handbook either, you have to figure out how to get it installed and working yourself.

The similarity between it and Gentoo, however, is that both allow great customisation of what software you want to install and what features you wish to compile in or leave out, and, as a result of being source based, both require a lot more manual input and intervention than even Arch requires of you. I almost think of LFS as Gentoo without the training wheels: Gentoo covers a lot of the same concepts at a bit of a higher level than if you were actually compiling software yourself, but this exposure definitely helps in the long run if you need to compile software yourself in the future, or if you wish to give LFS a bit of a test drive.

Pros and Cons of Gentoo

So, after having installed Gentoo and played with it for about a week, I can give you my initial impression, but will do so with the following disclaimer: using a system for a week is not the same as using it for a month or a year. The more time you spend in a distribution, the more you learn about its quirks and what makes it tick, and the more you adjust to its way of doing things. In other words, take what I say with a grain of salt.

Pros

  • USE flags: The amount of granular control you have means you can enable or disable any features you desire in a package. I've found that equery helps with finding out what USE flags a package has and what they do. This is very helpful for getting a package down to the raw essentials, if that's what you're after. A helpful addition are profiles: setting one up reduces the burden of selecting all the right USE flags for things like a KDE or GNOME desktop, or simply if you want a hardened system. USE flags are also a great, simple introduction to the configuration of source packages if one ever needs to manually compile software; it's an abstraction layer that works well.
  • emerge: emerge is the bomb. There are probably many things to like about it, but for me I like how I can put commonly used options into make.conf and not have to do that extra typing every time - it just feels cleaner than creating an alias for the command (there's a small make.conf sketch after this list). Another feature I like is parallel fetching, and the way you can build more than one package at a time; this really cuts down on the idle time. Great feature - I wish I'd see it in other package managers *ahem, pacman*. I only wish the download progress were shown on the same screen when these features are enabled, but it's a small price to pay.
  • General configuration: I like how Gentoo's general configuration simplifies one's use of emerge, and how straight forward it is to use OpenRC to manage system services. While I definitely like systemd (an opinion that seems unpopular in the Linux world), it is nice to have a system that is user friendly and familiar to those used to init systems. Since you are using the command line all the time in a system like this, I feel Gentoo has done a real good job of making it easier on the user to do what they need to do.
  • You learn Linux: I didn't come to Gentoo as a noob - I've used my share of distros and have been using Linux on and off now for quite a few years - but I know that really learning Linux has become a bit "harder" now than it used to be. The first distro I really started playing with was Ubuntu 4.10, and back then, even though it was a new user friendly distro, you still had to hit the command line to edit things like which repos you had access to, and you had to update the system from there too. You still got exposure to the command line, which is in essence what Linux is all about. With the push to GUI based tools in most popular distros, less knowledge is transferable when you do decide to switch ship, and it's harder to fix things yourself when the time comes. Distros like Gentoo and Arch are becoming more and more fringe nowadays :)
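
To give a taste of what I mean by putting common options in make.conf, here's an illustrative excerpt along those lines - not my exact file, and your values will certainly differ:

# /etc/portage/make.conf (illustrative excerpt)

# options emerge should always assume, saving the extra typing
EMERGE_DEFAULT_OPTS="--ask --verbose"

# fetch the next package's sources while the current one compiles
FEATURES="parallel-fetch"

# number of parallel compile jobs, usually one per core
MAKEOPTS="-j4"

# global USE flags - setting something like icu up front can save
# recompiles later (see the cons below)
USE="icu"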

Cons

  • USE flags: Yep, this gets featured twice. While the customisation options are nice, and features like profiles help simplify an obviously complicated system, it's still something difficult to keep on top of. While there are probably numerous examples, I'll use one I encountered myself: when looking to use Chrome during my first install, I noticed that to install it I also had to re/install a bunch of packages to go along with it. A bit of a headache, really, considering Chrome takes a couple of hours to compile just by itself. When setting things up for the second time, I came across a youtube video which mentioned that adding icu to the list of global flags at the outset helps when installing it later on, and that turned out to be right. Knowing which flags to set to save yourself hassle further down the line seems like something that requires a lot of research and trial and error.
  • Packages: Packages face the same problem. Admittedly, once you have a system set up you probably won't modify it too much, but with Gentoo it does become a nuisance when you want to install a new program and find out after the fact that it requires a bunch of others to be recompiled as a result. Alongside this, trying to go for a minimal install can, and often does, end up biting you. From the few videos and blogs I came across about Gentoo, there seemed to be a theme of people going for a full install rather than a minimal one because they didn't want to worry about conflicts arising later on. Like with USE flags, you might not always know what you need up front, so the "better safe than sorry" approach gets applied. The irony is that for all the customisation you have at your finger-tips, you end up sacrificing it, because trying to optimise "too much" becomes too much of a headache - which makes me wonder whether that level of "safe customisation" really makes it any better than something like Arch.
  • Setup time: It really sucks how much time it takes to compile, and it's a bit frustrating that you have to organise your life around system updates (i.e. leaving your PC on overnight or having to plan updates ahead). Furthermore, if something goes wrong and you try to fix the issue by making changes that require recompilation or the downloading of new packages, it could take hours to find out whether the fix worked, and it could turn out to be time utterly wasted. I came across this many times when searching through the Gentoo forums.
  • Rolling release "light": It doesn't seem that all packages are at their latest versions. For example, I was surprised to see the default version of GCC is 4.8.5 instead of 5.2 like in Arch, another rolling release distro. Another example would be Plasma 4 vs 5 (the latter, again, available in Arch). Now, while I understand they're still working out the kinks and that's why these aren't the defaults, I do recall one of the leading arguments back in the day being that Gentoo was bleeding edge. This, along with the ability to customise and the supposed performance gains, made it a very attractive offering to power users. Given current hardware performance, building from source in my experience doesn't really make too much of a difference nowadays (except perhaps in edge cases), and since not all packages are the most recent, the only thing still going for it, I guess, is that it's really customisable.

Final Thoughts

I like Gentoo. I really do, and in fact I plan on installing it on my desktop once I've played around with it enough in my VM, trying out various things and getting a good feel for how to go about certain things in a safe manner (such as switching profiles from KDE to GNOME without ruining the system - that might well be doable safely, but I want to spend a bit of time looking into it and trying it out). The thing is, though, it's one of those distros I can tell will be hard to use on a long term basis.

The reason for that is simple: on a distribution like Ubuntu, Suse or Fedora you have a unified theme and experience. Generally everything looks beautiful, and things tend not to stick out like they don't belong. That level of care and attention is not found in a distro like Gentoo; you have to put the work in yourself to get a nice looking distro with a pleasant user experience. When you combine this with the fact that the hassle of maintaining a system like this is far above and beyond that of the average user distro, you eventually move on to something that just stays out of your way while you get to business doing other things.

For that reason, I don't think most of the people who try Gentoo will stick with it in the long run. I think it's a good distro for getting more familiar with the command line and learning how to maintain a Linux system, but overall probably one that will become too much of a headache for most. So, my final thought is: if you're new to Linux (but have had some time to try a few distros out and aren't too intimidated by the command line), you should probably give Gentoo a try for a couple of months. I think you'll get a real kick out of the experience and learn a lot.

Gentoo too full on?

If you're new to Linux, or at least still think of yourself as in the beginner range, I strongly suggest trying Arch. While Arch has its own weaknesses compared to Gentoo, I think it has a lot of strengths that put it in an overall superior position if you're looking for a permanent home. It's binary based, and while still very hands on, you will have a running system much faster than you would with Gentoo.

Best of luck!

Misc

[1] Though I could not find any explicit statement mentioning that they're rolling release, they most definitely are, as noted in their FAQ here: "Gentoo's packages are usually updated shortly after the upstream authors release new code". Further, they are often described as a meta distribution, and rightly so, but for simplicity I will just refer to them as a distribution in this post.

Welcome

Welcome to my new site, more news to follow in the coming days!