Monday, 29 July 2019

4 years of constant mob programming

I started my first IT company in 1997. Ever since, I’ve been working as a developer or an architect, building software. There have been many ups and downs, highs and lows, in my career. But all in all, I would say I’ve been coding professionally for 22 years now. The last 4 years I’ve spent doing mob programming almost exclusively, 8 hours a day, with the brief exception of one company where I first spent 4 months setting the stage for the team to mob program before it finally happened.

It has changed the way I look at developing. In short: it has helped me stop trying to be the most clever developer and instead focus on helping the team build the software our customer needs. The reward of a joint successful effort is so much larger than the reward of admiring your own clever code.

So: Why on earth would we take a group of expensive developers and put them in front of a single computer to mob program?

Well: because software. During my years of developing software I’ve run into a multitude of issues. We tend to write complicated code even when it’s not needed. We write bad code, maybe blaming deadlines and meaning to go back and fix it later. Or maybe because we have no code reviews. Or even when we do have code reviews, they’re often done from a micro perspective, missing the whole picture.

We have dependencies on specific people; only Brent the Hero can understand and fix this. Or at least Brent has to look at it and tell the rest of us what to do. And approve it later. We are specialized; we find the task on the board that we know well, because it’s so much quicker if Joe does the CSS and Jenny does the database optimization.

And for some reason, we’re often under time pressure, which narrows our focus, makes us do less research and consider fewer options. Oh, and let’s skip the unit tests, just this once. Someone made an estimate and it magically turned into a deadline.

Let’s not forget the things outside the actual coding. The prioritization issues, where everything is equally important, forcing development of multiple features in parallel in the same codebase. Important information not reaching the people who need it. The confusion of requirements in an agile world: too vague out of fear of doing waterfall, or too specific, with time wasted because they can’t be implemented that way.

Long testing periods, where feedback comes in much too late. Is anyone using the Acceptance Test Environment this week? Oh, we’ll wait then. And where should bugs be reported, as a subtask of the user story or on a specific bug board? Maybe we need an agile coach who can tell us we need a definition of done, or a Jira consultant to help us manage our agile tool?

And very importantly; don’t release on a Friday, because we don’t trust our software. Or wait, let’s make that Thursday to be sure.

So software development has issues with delivery and quality. Not really news. One way to come to terms with this was Extreme Programming in the late 90’s. The methodology is about taking practices that work well and applying them to an extreme level. For instance; since code reviews are a good practice, why not do them all the time by pair programming?

XP also advocates practices such as code simplicity, TDD, working tightly with customers and within the team, and not developing features until they are needed, by which point you know more and understand better how and what to build.

XP emphasizes teamwork and has five core values: communication, simplicity, feedback, courage and respect. The beauty of this is that these values stress that the process of building software is not limited to the craft of designing the system or writing the code. The team creating the software consists of managers, testers, UX designers, developers and customers, and open communication between these roles is crucial.

Still though, 20 years after the excellent book by @KentBeck on Extreme Programming came out, we have the same issues. How can we learn to work better together to deliver great software?

Mob programming

@WoodyZuill expresses the essence of mob programming brilliantly: “All the brilliant people,
working on the same thing,
at the same time,
in the same space,
and on the same computer”
This means all the developers sit around one computer, one person at the keyboard, using a timer to rotate. Everyone does everything. If a task is about CSS and you’re the database girl, you still take your turn at the keyboard. The method is similar to pair programming, where there is one driver at the keyboard and one navigator at the side. Here, though, we have multiple navigators.

The navigators constantly discuss design and code, review code as it’s written, and google when the team runs into problems. If the team is building a public web site, a feature can be tested on different devices while being built. As in the metaphor of driver and navigator, the driver follows the navigators’ instructions. If you don’t know CSS and aren’t sure what to write, it doesn’t matter. The team helps you forward.

In my experience, when this works, it is career changing and an eye opener to how important it is to work closely together as a team – product owners, UX, testers and developers – to achieve flow, joint ownership and pride in what’s being built.

Better quality

This aspect of mob programming is usually quite easy to achieve. More eyes, minds, and diversity of thought and experience make the team discuss and think about features and coding in a new way. The joint ownership of the code encourages refactoring and solutions that the team feels proud of.
  • The team programs, analyzes, designs and tests at the same time.
  • There’s more focus on creating simpler solutions that will be maintainable for years to come.
  • Courage: we dare to change code and ideas when needed.
  • Solutions are thought through at all levels, from CSS to deploy scripts.
  • Constant learning; technology, domain, best practices, keyboard shortcuts.
  • Knowledge sharing; everybody does everything.
  • With more eyes on the code, duplication and dead code are more easily discovered.

Better delivery

We often talk about high performing teams, meaning teams that are good at producing value. In a world where time to market can be crucial for a company, being able to deliver often and with confidence is essential. One huge benefit of mob programming is the short lead times and flow it can create:
  • With a working Continuous Delivery pipeline ->
  • When everyone works on the same feature ->
  • When the code being checked in is already reviewed and the feature is tested ->
  • When there are no merge issues ->
  • Then a feature can go from idea to production in the shortest possible time.
I’ve been at places where the mob has delivered a new feature every third or fourth hour; read the requirements, developed the feature, tested on mobile devices, got OK from UX and PO and pushed to production. It’s a powerful concept, and it’s rewarding to get that feeling of a job well done many times a day. 😊

It takes time

Getting mob programming to work is not something that’s done in a day. It’s exhausting. It’s a whole other level of focusing, verbalizing your ideas and listening to other people verbalizing theirs. You don’t take all those micro pauses you normally do. You don’t get stuck reading email, or browsing the web for weather or news. The mob keeps on, all day, like a bulldozer. The first weeks, you will feel drained.

But it’s not all about the developers. The process surrounding development needs to catch up with this way of working. When requirements are vague and exist only in writing, the mob can be very inefficient. Everyone, not just the developers, has to be more present, ready to discuss the details of the current or next task, be there for questions during development and approve the work when it’s done.

There’s also the need to find some structure and routines for the daily work. People get in to work and leave at different hours; how will this be handled? What type of sync meeting do we need in the morning? The focus should be on what will be done during the day and who needs to be available. Clear goals for each day and feature are a must. To make sure everyone in the team – PO, UX, testers and the mob – understands the feature, a short session at a whiteboard before starting a task is really useful. Not to settle the details, but to make sure everyone knows what the team expects to deliver.

It tests the team

All the issues you normally have in a team get multiplied in a mob. If you are in a team where you still sit and work by yourselves, you can avoid interacting with team members you don’t get along with. Here, you don’t have that possibility. When there are issues around trust, a common goal, commitment, ego, status or ownership of the code, they will affect the mob much more than they would a regular team.

It’s all about mindset

  • It’s not about making your own voice heard.
  • It’s not about getting your own solution done.
  • It’s about building an environment where the unique knowledge and competence of everyone involved is recognized and utilized to deliver the best software for the user.
  • Stop being the best developer. Start being the one helping the team learn and move forward.

It’s all about learning

Mob programming is constant learning and sharing. If your suggested solution isn’t the one the team tries, be open about the other solution. Make sure everyone’s voice is heard and encouraged and view all suggestions as a basis from where you can build further. Each step in refining your code is a step towards learning more.
  • Together we become the perfect full stack developer. See the possibility in learning CSS, or the database, or the other things that you didn’t think you would need to know.
  • In sharing your knowledge with others, the team’s collective competence is raised. This allows individuals to sharpen their skills within special areas even further.
  • No more person dependencies. Everyone does everything. No one has to work from home when sick because all tasks are handled by the mob.
  • There is plenty of room for innovation in a mob. When the mob has a hard problem, we solve it. When we have an easy problem, we innovate or automate.
  • We learn by discussing and trying things. No more meetings where the team has to agree on patterns, naming standards, or someone needs to explain to the rest about this new clever solution that has been implemented.
  • A new team member will be productive from day one and quickly find her footing in the codebase with the help of the mob.

Quality is King, but Flow is King Kong

In my opinion, having flow – where the product owner is close by, questions can be asked and answered as they arise, and, most importantly, tasks get done, tested and closed – is the key to almost everything else. When we can check things off the list and feel good about what was delivered and how, many other issues never arise. The fewer things we have to keep in our heads, the better we can handle the things we actually do have to think about.

I have been in mobs where this hasn’t been the case. The team still had a huge backlog, never succeeded in finishing sprints, failed on quality and had a long list of bugs. Not working closely with the PO means that understanding and trust don’t get built, and the development process never adjusts to the mob. My biggest takeaway from mob programming is probably this: realizing how important this part of software development is.
  • Everyone needed must be available and focused on the current and immediate features: PO, UX, test.
  • Since the team is working closely with the PO, there will soon be a common understanding on how big different items in the backlog are.
  • The backlog needs at most 5 prioritized and groomed items, ready to be worked on. These should be revised every day: still the most important? Go on. Otherwise, move it down.
  • After development – meaning setting the requirements, developing, testing, feedback and refactoring – the item should be done. Preferably, testing is done during development, or together in the mob, and the next task is not started until the previous item is done.
  • Mob programming can be the driver to change the development process for flow, by forcing work on one feature at the time. If the people needed aren’t available, the mob and therefore all of development grinds to a halt.

And the tricky stuff

It’s not all easy breezy though. I’ve been in mobs where 3 out of 6 developers left the team because mob programming didn’t work for them. I’ve heard many comments about mob programming only being for people who can’t understand the code themselves. I’ve had teams next to us loudly declare that the management were idiots for letting us work in this inefficient and embarrassing way. So there are definitely some tricky bits I’ve run into during these 4 years.
  • Mob programming has little or no status among many developers.
  • It’s hard to handle a team member who doesn’t want to mob. As soon as the team is divided, many of the benefits of mob programming disappear.
  • When management sees a successful mob programming team, they tend to push the method on other teams. Like so many other things, it has to be a decision from within the team. Mob programming is a huge change in how a team works and needs to be taken seriously.
  • If requirements aren’t clear, or no one can answer questions directly, the mob becomes very inefficient. Not having the right foundation for mob programming is a sure recipe for failure.
  • Combining mob programming with traditional Scrum, using estimates beforehand to compose sprints, is not a good match. The power of the mob is the constant delivery and flow.

Some final advice

  • Continuously review and improve the mob. Remember that mob programming is not the goal. It’s a way to achieve the goal: How can we work better together?
  • Don’t set out to build a team that mob programs. Try instead to build a climate where cooperation can thrive. A place where anyone, anytime, can say “I don’t understand” without being sneered at.
  • Make sure everyone’s voice is heard and appreciated.

Thursday, 15 March 2018

Build a Raspberry Pi Musicbox using Nodejs, Docker and Mopidy

The goal of this blog post is to set up a Raspberry Pi in a nice box with a button that, when pressed, plays a song from a Spotify list on a bluetooth speaker. The song's name and a link to it will be posted on Twitter. The code will be written in Nodejs and deployed using Docker. The Docker container needs to be restarted on reboot, as does the Mopidy server that handles the music.

Items needed:
  • A Raspberry Pi, model 3 with wifi and bluetooth
  • A micro SD card, 16GB
  • A micro USB cable and adapter
  • A bluetooth speaker
  • An arcade button
  • An LED sequin to insert into the button
  • Three jumper wires with a male end
  • Optional extra wire to extend the jumper wires
  • Shrink tube to cover solder joints
  • A nice box to keep the Pi in

Set up a new Raspberry Pi

  • Download the Raspbian Stretch image from https://downloads.raspberrypi.org/raspbian_latest
  • Install the image on your SD-card. I use Etcher to do that. Insert your SD card, or your SD card adapter into your computer. Open Etcher, select the downloaded zip-file and click Flash.
  • Insert the SD card into the Raspberry Pi and connect it to power, a screen, a mouse and a keyboard. Raspbian will boot up and display the desktop.
While on the desktop, it's time to do some initial configuration.
  • Open the Raspberry menu in the top left corner, go to Preferences >> Raspberry Pi Configuration
  • On the System-tab, change the password for the user. The default user 'pi' has root access, and even if you only intend to use it on your home network, it's good practice to change this from the default password 'raspberry'.
  • On the System-tab, optionally change your hostname from the default 'raspberrypi'. For this one, my tenth on the network, I'm going with raspi10. :)
  • On the System-tab, check the box "Wait for network". This will make the Pi wait for the network to be available before starting up any of your services.
  • On the System-tab, click Set resolution and change the value to the one you want to use when logging in from a remote computer. I use 1920x1080 which looks good on my Macbook.
  • On the Interfaces-tab, enable VNC for remote desktop connection and SSH for remote ssh connection.
After this configuration, the Pi will need to reboot. When this is done, connect the Pi to your home network by clicking the network icon in the top right corner and selecting the network.

Pair the Pi to your bluetooth speaker

In the top right corner, click the bluetooth symbol. Select 'Add device'. Make sure your speaker is turned on and find it in the list of devices. After the pairing is done, right click the speaker symbol in the top right corner. Select your speaker.

This is the easy setup that will make sure that your Pi will connect to the bluetooth speaker on startup, if the bluetooth speaker is turned on and in range. There are some issues with bluetooth speaker connections on the Pi. My solution for now is to just turn the speaker on and reboot the Pi. :)

When this is done, you can disconnect the Pi from the external screen, mouse and keyboard and just plug it in somewhere. It is now accessible from your computer and that's how we'll do the rest of the setup.

Connect using SSH

You can connect to your Pi using SSH from your command prompt or terminal. I use iTerm on Mac and Cmder on Windows. Use the hostname of your Pi, in my case raspi10.

ssh pi@raspi10

This will prompt you for your password. Enter it and you will be logged on to the Pi, with the user directory /home/pi as your current working directory.

Connect using VNC

Another way to connect is by remote desktop. This way you will see the graphical Pi desktop on your own computer. It's slower, but quite nice if you're not used to working in the command prompt. For remote desktop, RealVNC is used; its software, VNC Connect, is included with Raspbian. There is a VNC server installed, which allows you to connect remotely to your Pi, and also a VNC Viewer, which allows you to connect to other desktops from your Pi. To connect to your Pi, you also need a VNC Viewer installed on your computer.

To enable and use VNC, log on to your Pi using your command prompt or terminal. Now run the Linux package manager apt-get, first to update the list of available packages, then to install any new versions found. Always run apt-get update before installing to make sure you get the latest versions of the packages.

sudo apt-get update
sudo apt-get install realvnc-vnc-server realvnc-vnc-viewer

When this is done, download and install VNC Viewer on the computer from which you want to connect to the Pi: https://www.realvnc.com/en/connect/download/viewer/. Open it and enter the hostname of your Pi to connect to it. You will be prompted for username and password. If the window where the desktop is shown is too small, go into the Raspberry Pi Configuration again and set the resolution on the System-tab.

Install Docker

Docker is a great way to package and run your code in a container, and since it works perfectly on a Pi, let's install it and use it for our application. Connect to your Pi using SSH and run the following command to install Docker:

curl -sSL https://test.docker.com | sh

This will actually install the test/edge version of Docker (18.03), which is the latest code. The reason I use this instead of the stable version (17.12) is that I want Docker to restart the container after I've unplugged the Pi and plugged it in again. This feature does not work in the latest stable version.

Now, run the following command to make sure your user can access the Docker client:

sudo usermod pi -aG docker

You need to logout and login again for the user changes to take effect. Logging out is done by writing exit in the terminal.

Test your Docker installation by pulling down the Hello World image for Raspberry Pi:

docker run --rm armhf/hello-world

This will pull down the image, start a container and print the text 'Hello from Docker on armhf!' in the terminal. The --rm flag removes the container automatically after it's been run.

To remove the image you just pulled down, write

docker images

This will display a list of images on your Pi. Find the Image ID column and use that value to write

docker rmi your_image_id

This will remove the image from the Pi. Unless you have tons of images, it's enough to use the first two characters of the image or container id when referencing them in the Docker client.

Install Mopidy

Mopidy is a music server that can run on the Pi and play music from many different sources. The default is music from local disk, but there are many extensions for playing music from Spotify, SoundCloud and others. I'm going to use Mopidy to play music from one of my Spotify playlists. To use Spotify like this, you need a Spotify Premium account.

In the terminal, first add Mopidy to the apt package sources:

wget -q -O - https://apt.mopidy.com/mopidy.gpg | sudo apt-key add -
sudo wget -q -O /etc/apt/sources.list.d/mopidy.list https://apt.mopidy.com/jessie.list

Then install Mopidy and the Spotify extension using apt-get:

sudo apt-get update
sudo apt-get install mopidy
sudo apt-get install mopidy-spotify

Now we need to configure Mopidy and make sure it runs on startup. We start by editing the config file for Mopidy. Here you will need your Spotify username, password, client ID and client secret. To get these, go to developer.spotify.com, log in and create a new app. After registering it, you will get the tokens needed.

sudo nano /home/pi/.config/mopidy/mopidy.conf

In the config file, uncomment and fill in the following values in the Spotify section:

[spotify]
enabled = true
username = your_user_name
password = your_password
client_id = your_client_id
client_secret = your_client_secret

Also uncomment and change the hostname in the http section to allow connections on all ip addresses.

[http]
hostname = 0.0.0.0

Lastly, uncomment and change the audio output in the audio section:

[audio]
output = alsasink

Exit and save your changes.

To have Mopidy run at startup, we edit the rc.local file.

sudo nano /etc/rc.local

In the file, before the line exit 0, add the path and executable command for starting Mopidy. We need to run it as the user pi, since rc.local otherwise runs as root and will miss the config and user rights we need.

sudo su pi -c /usr/bin/mopidy

Exit and save your changes, then reboot the Pi. We're done with the software installation; now we need to wire up the button and write the Nodejs code for playing music and sending tweets! :)
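Once Mopidy is running, it exposes an HTTP JSON-RPC API on port 6680 (which is what the hostname = 0.0.0.0 setting above opens up), and that's how a Node app can tell it what to play. A rough sketch of building such a request body; the exact calls in the musicbox code may differ, and this helper is my own illustration:

```javascript
// Build a JSON-RPC 2.0 request body for Mopidy's HTTP API.
// In the real app this would be POSTed to http://localhost:6680/mopidy/rpc.
function rpcBody(id, method, params) {
  return JSON.stringify({ jsonrpc: "2.0", id: id, method: method, params: params });
}

// Queue a track (the track URI here is a placeholder), then start playback.
const addTrack = rpcBody(1, "core.tracklist.add", { uris: ["spotify:track:example"] });
const play = rpcBody(2, "core.playback.play", {});

console.log(addTrack);
console.log(play);
```

Since the container later runs with --network=host, calling localhost from inside it reaches Mopidy on the Pi.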

Fix the wires

This project needs three wires with a male connector at one end. This end will connect to the Pi's pins. The other ends will be soldered onto the button's and LED sequin's connectors. I want my wires quite long so I can open the box without being afraid I'll disconnect things, so I'll extend the jumper wires by soldering them to an extra piece of wire.

Cut off one of the ends of the male jumper wires. Strip off 5 mm of insulation. If you're not happy with the length of the wire, cut an extra piece of wire and strip it at both ends. Solder the wires together and cover the solder joint with shrink tube. I just use my hair dryer to heat it up so it shrinks and covers the joint tightly.
Cut an extra piece of wire, around 6 centimetres, to connect the LED sequin's ground connector to the button's ground connector. That way they can share the ground pin on the Pi, and we get one wire less to handle. Strip off 5 mm at both ends.

Make the button pretty

If you already have a button with an LED in it, you don't have to do this. My button is without an LED, but that can easily be solved using an LED sequin. These are normally used in wearables, but they work fine in all types of projects. They're really small and easy to connect. Start by soldering one of the long wires to the positive connector on the sequin. This will be connected to a GPIO pin on the Pi, which will be programmatically set to high or low. Then solder the short wire to the sequin's negative connector. This will be connected to ground on the Pi, via the button's ground connector, and that will close the circuit. When this is in place, the LED will be turned on when the GPIO pin is high and turned off when the GPIO pin is low.

Always test your connections before moving on. Just plug in the Pi and hold the positive wire against a 3V pin and the negative wire against a ground pin. The LED should light up.

Open the button using a small screwdriver. Glue the sequin to the inner activator, pull the wires out through the opening in the sides and push the button closed again.
If you have a button with a built-in LED, the same rule applies: solder the long wire to the LED's positive connector and the short one to the negative.

Connect the button

Now it's time to solder the button's connectors. Solder the short wire from the LED to one of the connectors. To the same connector, also solder a long wire that should connect to the Pi. This is now the ground wire for both LED and button. Solder another long wire to the other connector. This is the GPIO pin wire. When done, attach the ground wire to one of the ground pins, the LED wire to pin GPIO17 and the button wire to pin GPIO2. A complete pinout of the Raspberry Pi 3 can be found here.

The code

The code for running the application can be found at http://www.github.com/asalilje/musicbox. It contains a Dockerfile that just packages the code without running npm install. That step must be done on the Pi itself, since the epoll library used for the pin connection is a native module that has to be built for the host it runs on.
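The heart of the app is tying the button press to the LED and the music. As a sketch of that logic only – with the GPIO access abstracted away so it runs off-device – it might look roughly like this. The real code presumably watches GPIO2 with an epoll-based library such as onoff; the function names here are my own, not taken from the repository:

```javascript
// Sketch of the button/LED logic. `led` is anything with a write(value) method
// (in the real app, an onoff-style Gpio on pin 17); `playSong` starts a song.
function createMusicBox(led, playSong) {
  let playing = false;
  return {
    // Called when the button pin signals a press.
    onButtonPress() {
      if (playing) return;   // ignore presses while a song is already playing
      playing = true;
      led.write(1);          // LED on while the music plays
      playSong();
    },
    // Called when Mopidy reports the track has ended.
    onSongEnded() {
      playing = false;
      led.write(0);          // LED off again
    },
  };
}

// Example with a fake LED that just records its states:
const states = [];
const box = createMusicBox({ write: (v) => states.push(v) }, () => {});
box.onButtonPress();
box.onButtonPress();  // second press while playing is ignored
box.onSongEnded();
```

Injecting the GPIO object like this also makes the logic testable on a machine without pins.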

There is a Docker image for the project: asalilje/portablemusic. If you want to build the image yourself from the code you write:

docker build -t yourtag .
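If you are writing the Dockerfile yourself, a minimal sketch could look like the one below. The base image and file layout are assumptions on my part, not copied from the repository:

```dockerfile
# Node base image for the Pi's ARM architecture (an assumption; pick one that
# matches your Raspbian setup).
FROM arm32v7/node:8
WORKDIR /app
# Only copy the source. npm install deliberately runs on the Pi at container
# start, since the native epoll module must be built on the host it runs on.
COPY . /app
```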

Deploy to Pi using Docker

SSH in to your Pi. The code needs some environment variables for your Twitter account. If you don't have any, go to apps.twitter.com and create a new app. Since these variables won't change, we put them in a file. In the directory /home/pi, create a new file named env.list:

sudo nano /home/pi/env.list

In that file, insert your own tokens:

TWITTER_CONSUMER_KEY=your_own_consumer_key
TWITTER_CONSUMER_SECRET=your_own_consumer_secret
TWITTER_ACCESS_TOKEN_KEY=your_own_access_token_key
TWITTER_ACCESS_TOKEN_SECRET=your_own_access_token_secret

When this is done, you can run the app using the following command:

docker run -w=/app --network=host --name=your_container_name_here -t --privileged --restart=unless-stopped -e SPOTIFY_PLAYLIST_URL="your_playlist_here" --env-file=/home/pi/env.list your_image_here /bin/bash -c "npm install; node app.js"

Let's go through this command in detail.
  • -w=/app is the working directory inside the container.
  • --network=host makes the container run on the same network as the host, the Pi. This means we can call localhost on the Pi from within the container.
  • --name is the name that will show up in the list of running containers.
  • -t will let us terminate the Docker run command and get control of the command prompt again using CTRL+C.
  • --privileged means the container will run in privileged mode and be able to access the pins.
  • --restart=unless-stopped makes sure the container restarts after reboot.
  • -e sets an environment variable that the node app inside can access using process.env.VARIABLE_NAME.
  • --env-file is the path to a file containing multiple environment variables.
  • your_image_here is the path to the Docker image.
  • /bin/bash -c is a list of commands that will be run in the container, in the given work directory. Here we run npm install and then start the app.
In my case, using the Docker image I built and a nice 80's Spotify playlist, the command looks like this:

docker run -w=/app --network=host --name=portablemusic -t --privileged --restart=unless-stopped -e SPOTIFY_PLAYLIST_URL="spotify:user:asaki:playlist:12V6KDKzfi3m0qRZZTXeCb" --env-file=/home/pi/env.list asalilje/portablemusic /bin/bash -c "npm install; node app.js"

When it comes to playlists, Mopidy will only be able to load playlists from your own user account.
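Inside the container, the Node app picks these variables up through process.env. A hypothetical sketch of that configuration step – readConfig and its return shape are my own illustration, not code from the repository, but the variable names match the docker run command above:

```javascript
// Read the app's configuration from an environment object and fail fast with
// a clear message if any of the expected variables is missing.
function readConfig(env) {
  const required = [
    "SPOTIFY_PLAYLIST_URL",
    "TWITTER_CONSUMER_KEY",
    "TWITTER_CONSUMER_SECRET",
    "TWITTER_ACCESS_TOKEN_KEY",
    "TWITTER_ACCESS_TOKEN_SECRET",
  ];
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error("Missing environment variables: " + missing.join(", "));
  }
  return { playlistUrl: env.SPOTIFY_PLAYLIST_URL };
}

// In the container this would be called as readConfig(process.env).
```

Failing at startup beats discovering a missing Twitter token only when the first tweet is sent.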

Now, place the box and speaker somewhere and enjoy your music!

Thursday, 15 June 2017

Mob programming for managers

I could give you many, many reasons why mob programming is a great way of working. I've practiced it full time for two years now, at two different companies, and I really see no reason to go back to working alone. Together with my two colleagues Håkan Alexander and John Magnusson, I've been speaking about the subject at more than a dozen companies in Stockholm and at a couple of conferences. In short, we're into it. Big time.

At my current assignment, SEB, one of the biggest banks in Sweden, we've been mob programming since I came into the project. In fact, I strongly feel that most of the issues I encounter regarding poor quality and late deliveries can be helped by mob programming.

Of course, working this way, sitting 4 or more developers around one computer, can trigger a few questions. Is it really efficient? How much does every line of code cost? Isn't one person active while the others just sit around looking at their phones? Will management allow it?

Views on mob programming

To be honest, when we're out speaking about mob programming, managers are almost always positive. We talk about better quality, faster deliveries, better throughput and less time spent on fixing bugs.

Developers are more skeptical, especially the senior ones. Some feel that their work is too complex and that they need to solve problems undisturbed and alone.

Junior developers are often very positive though. Imagine the things they could learn sitting together with the senior developers, instead of struggling alone through legacy systems where the technology as well as the domain is uncharted territory for them!

It's not that strange, though, that managers encourage it and developers resist. For managers, this won't change anything about their day-to-day job, and it's always easy to encourage someone else to change their ways. For developers, on the other hand, it will deeply affect their everyday work.

SEB leader day plans

One day, our lovely agile coach Anna Borgerot came by at SEB and asked me if I wanted to help arrange the SEB IT Leader Day. 120 leaders within the IT organization from Sweden and Lithuania would meet up in Stockholm for a whole day with the theme of Learning. They had come up with the idea of letting the leaders do some coding, inspired by the Hour of Code movement.

Anna loved the way we were mob programming (she said it warmed her heart to see us :) ) and thought that would be a great way to inspire everyone at the event to actually sit in front of the computer and do some coding, even though they might never have done it before.

So naturally, I said yes. Such a great opportunity to spread the mob programming gospel, and to actually observe how people react when faced with a new team, a new way of working and a task way outside their comfort zone. My view is that this is the natural way for us to solve problems, but once we head out into working life, we're supposed to be efficient and go it alone. And – surprise – one mind does not think as well as four.

The coding task

To begin with, we realised it could not be just me managing the two-hour slot of introducing mob programming, coding and reflecting. Three more developers from SEB were asked to help: Andreas Frigge, Andreas Berggren and Magnus Crafoord. We thought about what the actual coding exercise should be and ended up with Minecraft Designer, found at code.org. Minecraft Designer is a block-based application consisting of 12 steps of different tasks, with short movies in between explaining the coding concepts.

We all tried it and decided it would work for everyone, coding experience or not. The familiarity of Minecraft was nice as well; it's something they might later do at home with their kids.

We also gave them the actual task: once they were done with the 12 mandatory steps, we wanted them to build their own game using what they had learned. There were some requirements, but quite vague ones. So we gave them a fixed deadline of one hour, a new way of working they hadn't chosen themselves, and vague requirements. Totally realistic, in other words!

Dress rehearsal

In order to see if our plan for the two hours given to us would work out, we did a dress rehearsal two weeks in advance. Anna found 12 willing test pilots to help us, which was incredibly helpful. We learned that my mob programming introduction had to be geared more towards the upcoming coding task, that we had to steer the division into teams better, that the written instructions about setting up the timer and the actual coding had to be much clearer, and that the screens, keyboards and mice at each station had to be checked. When running through it with real non-coding people, we ended up making small changes to almost everything.

We also noticed something else. They were laughing, pointing, discussing and creating stuff. Everyone participated. We started to feel quite good about the upcoming big day.

The leader day event

When the IT leader day finally arrived, we had the following schedule:
  • Intro to mob programming, 15 minutes.
  • Divide into teams, 4 at each table, 10 minutes. We took care to ensure they didn't work with the people they normally work with.
  • Set up the mob timer and programming environment, 10 minutes. Everyone had their own computers and we had 30 tables with screen, keyboard and mouse. We also asked them to use cool hacker names in the timer, which turned out to be a fun task that got the energy going in the room.
  • The coding task, 60 minutes.
  • Reflection in the team, 10 minutes. We had prepared a sheet of questions to help them.
  • Joint reflection, pass the mic, 10 minutes.


My reflections during the actual event were these: everyone coded. They followed the timer, which was set at 7 minutes. They laughed. They were active. They were loudly discussing the problems, solving all the tasks together. As I was walking the room, it was obvious how natural and powerful this way of working was.

At the joint reflection afterwards, one of the participants expressed that he was surprised that they had actually managed to solve this task and it was all due to working together. Another said that she actually felt she participated more when being a navigator than when being a driver. Great reflections, and so true!

More about the event can be found at SEB's website.

Comments afterwards

Getting the written opinions on the mob programming session a couple of days after the event was truly awesome:
  • Mob programming is DA SHIT!
  • Mob programming – WOW!
  • Interesting interaction!
  • Fun to do some programming that also got you to think of ways of working.
  • Good to focus on development and IT-competence.
  • Inspirational to hear about the mob programming method.
  • Great with mob programming (outside my comfort zone which is good for me to be!)
  • The introduction to Mob programming was the best – loved the simplicity and clarity.
  • MOB - great way of working - will try that in my department.
  • Loved the mob programming!
  • Fun/useful to try mob programming.
  • An extremely powerful way of solving problems!
I can't be anything but happy about those comments. The week after, I also started getting bookings in my Outlook calendar from managers wanting me to speak about mob programming at their departments. So yay, great success!


Will SEB start mob programming everywhere now?

Mob programming is something that I personally am very passionate about. But one thing to watch out for of course is this: no one wants to be told how to do their work. The way a team works must come from within the team. Inspiration is great, trying different things is great, but it has to be a team decision.

Showing IT leaders that mob programming is a good way of working mainly achieves this: it might remove the future obstacle of managers thinking it's a waste of money and time. It might give teams the opportunity to try it out. It might help managers accept that not everything has to be done according to the standard process and beliefs. Hopefully, in the end, some teams will be inspired to try it and see the benefits!

Sunday, 12 February 2017

Build an info station using Adafruit Feather

Everyone needs an info station. Press a button to quickly get the information you want! This project uses a Feather HUZZAH with WiFi, an arcade button and an i2c OLED display.

What does it do?

  • On startup, the info station connects to your WiFi.
  • It displays a message: 'please press button'.
  • When pressed, the API of your choice is called, the response is parsed and displayed.
  • After a given duration, the display goes back to showing the message: 'please press button'.

In my case, there is a bus stop outside my building. When I press the button, the station fetches real-time data showing when the next buses are due. The info updates once a minute for 5 minutes, then goes back into sleep mode. The reason I don't show the data all the time is that the API can only be called a limited number of times per month.



Step 1 - Prettify the button

The arcade button I bought looks nice, but I felt it would look even nicer with an LED light inside. Since a regular LED would be hard to fit in there, I decided to go with an LED sequin instead. I usually use these for wearables, and the sequin is very easy to work with; one end connects to voltage and the other to ground. So start by soldering two wires onto the sequin. Make sure to use different colors on the wiring so you later know which is plus and which is minus.


To test the wiring, connect your Feather to a computer using a micro USB cable and hold the ends of the sequin wires to the 3V (+) and GND (-) pins. The sequin should light up.

Use a small screwdriver to pry open the button by pressing the clips on both sides. Glue the LED sequin to the inside of the actuator so it will shine through the white plastic. Carefully put the button together again, pulling the wires out through the side slits without damaging the solder joints or wires. Test the sequin again using the Feather. A lovely shiny button! Who could possibly resist pressing it?



Step 2 - Solder pin headers into the Feather pads

In all Arduino and Raspberry Pi projects, one thing to remember is to always test the components before soldering. The easiest way to do that is by using a breadboard. If the pins are just pads, like on the Feather, I usually solder pin headers into them so I can plug everything into a breadboard and try out connections and code.

The Feather either comes with pre-soldered headers, or with a set of headers that you can solder yourself. There's no need to solder all of the pins. The ones used for this project are 3V, GND, GPIO2, GPIO4 and GPIO5. When you solder the headers, plug the long end of the pins in the header strip into the breadboard, place the Feather over the pins and solder the short end of the pin that's poking up through the pad. Now you can connect jumper wires and test out your connections to other components.


Step 3 - Test the Feather

To use the Feather HUZZAH, we need to install the ESP8266 board package in the Arduino IDE. Under Preferences >> Additional Boards Manager URLs, add the URL http://arduino.esp8266.com/stable/package_esp8266com_index.json. Next, use the Boards Manager to install the ESP8266 package.


Restart the IDE and you should now be able to select the board Adafruit HUZZAH ESP8266 in the Boards Manager.


Select the correct USB serial port under Ports and connect the Feather using a micro USB cable. Open a new sketch and insert the following code:
  void setup() {
    pinMode(0, OUTPUT);
  }

  void loop() {
    digitalWrite(0, HIGH);
    delay(500);
    digitalWrite(0, LOW);
    delay(500);
  }
The sketch will blink the built-in red LED on GPIO0 every 500 ms. Save the sketch and press Upload to upload it to your Feather. If you have trouble connecting to the Feather, it can be due to a faulty USB cable (it has to be able to transfer data) or issues with discovering the correct serial port. I have one USB cable that I know works well, and many that just won't connect to my boards. If your LED blinks on your first attempt, congratulations! :)


Step 4 - Connect and test the button

To connect and try out the button, solder wires onto the gold-plated connectors of the button. Use heat shrink tubing to cover the joints.

Strip and tin 5 mm at the other end of the wires so you can push them into the breadboard. Connect one wire from the button to ground on the Feather and the other to GPIO2.

In the Arduino IDE, find the example Button under Examples >> 02.Digital. The example lights up a LED when a button is pressed. Most boards have a built-in LED you can use; on the Feather, the built-in red LED is on GPIO0. So change the sketch to use 0 as the LED pin, upload the sketch to the Feather and make sure the button works and the Feather detects the state changes when you press it.


Step 5 - Connect and test the display

The display I chose is an i2c OLED display with pre-soldered headers and 4 pins, which makes it very easy to work with. SPI displays are generally a bit faster but need more pins. Some microcontrollers are better suited for SPI, and some displays need a bit of tweaking to use i2c. But both work fine; it's just a matter of changing the wiring and the number of pins.

Plug the display into the breadboard next to the Feather using the headers. Using male to male jumper wires, connect VCC to 3V, GND to GND, SCL to GPIO5 and SDA to GPIO4.

To communicate with the OLED display we need to install the Adafruit SSD1306 library. Go to the GitHub repo and download a zip file of the repo. Unpack it, rename the folder Adafruit_SSD1306 and place it in your Arduino/libraries folder. If this is your first library, you might have to create the libraries folder. Then do the same with the Adafruit GFX Library. This folder should be named Adafruit_GFX and placed in the same libraries folder as SSD1306.

Restart the IDE and open File>>Examples. You should now have access to the Adafruit SSD1306-examples.


Pick the example corresponding to your display; in my case, the 128x64 i2c one. Since my display does not have a RESET connector, I set OLED_RESET to -1, which means no reset pin is used. I also change the initialization of the display to the correct i2c address, 0x3C. To find out the i2c address, you can use the i2c scanner from Arduino Playground.
  #define OLED_RESET -1
  display.begin(SSD1306_SWITCHCAPVCC, 0x3C);  
Now upload the sketch to your Feather. Hopefully you have a working connection between the two components, and a working display.


Step 6 - The code

Now that everything is working and connected, it's time to try it out with the actual code. My example can be found at github.com/asalilje/nextbus. To get that exact code to work, you need to register with trafiklab.se to get an API key, and add your WiFi SSID and password. You also need to add the ArduinoJson library to the Arduino IDE. But I'm sure the buses at my stop are really irrelevant to you, so do whatever you want here. There are lots of fun APIs to play around with. :)

I chose to do the JSON parsing on the Feather itself. In retrospect, I should have built a Node API on a Raspberry Pi that called the external API, and fetched the nicely pre-parsed data from there instead.
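If you go the proxy route, the pre-parsing step could look something like the sketch below. Note that the response shape here is entirely made up for illustration; the real Trafiklab payload looks different.

```javascript
// Hypothetical shape of a departures API response -- the real
// Trafiklab payload differs. This only illustrates the idea of
// pre-parsing on a proxy so the Feather receives minimal data.
function simplifyDepartures(rawJson) {
  const data = JSON.parse(rawJson);
  // Keep only the fields the tiny OLED display actually needs.
  return data.departures.map(d => ({
    line: d.lineNumber,
    due: d.expectedTime
  }));
}

// Sample input standing in for the external API's response.
const raw = JSON.stringify({
  departures: [
    { lineNumber: "4", expectedTime: "12:03", destination: "Radiohuset" },
    { lineNumber: "72", expectedTime: "12:07", destination: "Ropsten" }
  ]
});

console.log(simplifyDepartures(raw));
// [ { line: '4', due: '12:03' }, { line: '72', due: '12:07' } ]
```

The microcontroller then only has to display short strings instead of walking a large JSON tree in limited memory.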

Whatever you choose to do, make sure your application works as expected before you start to solder and encase the components.


Step 7 - Putting it all together

Think carefully before you start to put all the components together. It's a good idea to solder one component at a time and check after every step that everything still works as expected. There's nothing worse than soldering everything at once and then discovering it doesn't work, with no idea where it went wrong. Trust me, I've been there...

Since the button snaps into its hole from the top down, it needs to be mounted before the wires are attached to the Feather. I used a small cardboard box with a lid and mounted the button first. Then I soldered the button wires to GND and GPIO2, and the LED sequin wires to GND and 3V. Button done, yay!

I almost always keep the headers when soldering components together, since I find it easier to get right than soldering wires directly into the pads. I solder the wires onto the pins and then use heat shrink tubing to cover both joints and pins. Heating the tubing with a hair dryer works perfectly!


For the display, solder VCC to 3V, GND to GND, SCL to GPIO5 and SDA to GPIO4. As you may have noticed, GND and 3V on the Feather are connected to multiple components. You might want to twist those wires together and tin them into one before soldering them onto the Feather pin.

That's basically it! Mount your info station on the wall where you need access to your quick info and enjoy the seconds you save by not having to get exactly the same information on your phone. :)

Tuesday, 11 October 2016

Handling mocks and environmental variables in JS-apps through Webpack

When JS apps need different variables in production and locally, one simple way to solve it is through Webpack. Say you're working with an app that calls an API using code similar to this:
DoRequest("GET", "http://swaglist.api.dev")
  .then(data => {
    const result = JSON.parse(data);
    if (result && result.swaglist) {
      this.setState({
        groovythings: result.swaglist
      });
    }
  })
  .catch(error => {
    this.setState({
      error
    });
  });
We want to be able to use a variable instead of the hardcoded API. Using the different configs for Webpack in dev and prod makes this an easy task.

Setting up Webpack's DefinePlugin

Take a simple Webpack config for a React application, like the following:
var webpack = require('webpack');
var path = require('path');

var config = {
  devtool: 'inline-source-map',
  entry: [
    path.resolve(__dirname, 'src/index')
  ],
  output: {
    path: __dirname + '/dist',
    publicPath: '/',
    filename: 'bundle.js'
  },
  module: {
    loaders: [
      { test: /\.js$/, exclude: /node_modules/, loader: "babel-loader" },
      { test: /(\.css)$/, loaders: ['style', 'css']},
    ]
  },
};

module.exports = config;
Let's presume we have completely different Webpack configs for dev and prod. First we add a global config-object at the top of the file:
var GLOBALS = {
  'config': {
    'apiUrl': JSON.stringify('http://swaglist.api.dev')
  }
};
Don't forget to stringify! Then we add a new plugin in the config section:
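The reason stringify matters is that DefinePlugin performs a plain textual substitution at build time, so the value must itself be a valid JavaScript expression. A quick sketch of the difference:

```javascript
// DefinePlugin injects the *text* of the value into your source.
// Without stringify, config.apiUrl would be replaced by the bare
// characters http://swaglist.api.dev -- a syntax error, not a string.
const bad = 'http://swaglist.api.dev';
const good = JSON.stringify('http://swaglist.api.dev');

console.log(bad);   // http://swaglist.api.dev   (invalid if injected as code)
console.log(good);  // "http://swaglist.api.dev" (a proper string literal)
```

With the stringified value, the substituted code compiles to a normal string literal in the bundle.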
  plugins: [
    new webpack.DefinePlugin(GLOBALS)
  ],
And now we can use the variable in our application:
DoRequest("GET", config.apiUrl)
  .then(data => {
    const result = JSON.parse(data);
    if (result && result.swaglist) {
      this.setState({
        groovythings: result.swaglist
      });
    }
  })
  .catch(error => {
    this.setState({
      error
    });
  });

Adding a mock API

Using this approach, it's very easy to set up a way to temporarily use a mock instead of a real API. This is a great help during development if the API in question is being developed at the same time. Or if you're working on the train without WiFi. :)

I like to use NPM tasks for my build tasks, in those cases where a task runner like Grunt or Gulp is not really needed. My NPM tasks in package.json typically look something like this:
  "scripts": {
    "build:dev": "npm run clean-dist && npm run copy && npm run webpack:dev",
    "webpack:dev": "webpack --config webpack.dev.config.js -w",
    "build:prod": "npm run clean-dist && npm run copy && npm run webpack:prod",
    "webpack:prod": "webpack --config webpack.prod.config",
    "clean-dist": "node_modules/.bin/rimraf ./dist && mkdir dist",
    "copy": "npm run copy-html && npm run copy-mock",
    "copy-html": "cp ./src/index.html ./dist/index.html",
    "copy-mock": "cp ./mockapi/*.* ./dist/"
  },
Now, to add a build:mock task that uses a mock instead of the real API, let's start by adding two tasks in package.json.
"build:mock": "npm run clean-dist && npm run copy && npm run webpack:mock",
"webpack:mock": "webpack --config webpack.dev.config.js -w -mock",
The build:mock task does the same as the ordinary build:dev task, but calls webpack:mock instead. webpack:mock adds the flag -mock to the Webpack command. Arguments to Webpack are captured using process.argv, so we just add a line of code at the top of webpack.dev.config.js to catch it:
var isMock = process.argv.indexOf('-mock') > 0;
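To see why this one-liner works: process.argv holds the node binary first, then the script path, then the arguments. A small sketch (the argv arrays below are illustrative, not captured from a real run):

```javascript
// process.argv for the mock build looks roughly like:
// ['node', 'webpack', '--config', 'webpack.dev.config.js', '-w', '-mock']
// indexOf > 0 suffices, since index 0 is always the node binary itself.
function isMockBuild(argv) {
  return argv.indexOf('-mock') > 0;
}

console.log(isMockBuild(
  ['node', 'webpack', '--config', 'webpack.dev.config.js', '-w', '-mock'])); // true
console.log(isMockBuild(
  ['node', 'webpack', '--config', 'webpack.dev.config.js', '-w'])); // false
```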
Now we can change the GLOBALS config-object accordingly. The resulting Webpack config looks like this:
var webpack = require('webpack');
var path = require('path');

var isMock = process.argv.indexOf('-mock') > 0;

var GLOBALS = {
  'config': {
    'apiUrl': isMock
      ? JSON.stringify('./mock-swag.json')
      : JSON.stringify('http://swaglist.api.dev')
  }
};

var config = {
  devtool: 'inline-source-map',
  entry: [
    path.resolve(__dirname, 'src/index')
  ],
  output: {
    path: __dirname + '/dist',
    publicPath: '/',
    filename: 'bundle.js'
  },
  plugins: [
    new webpack.DefinePlugin(GLOBALS)
  ],
  module: {
    loaders: [
      { test: /\.js$/, exclude: /node_modules/, loader: "babel-loader" },
      { test: /(\.css)$/, loaders: ['style', 'css']},
    ]
  },
};

module.exports = config;
The mock is nothing more advanced than a JSON-blob with the same structure as your API:
{
  "swaglist": [
    {
      "thing": "Cats",
      "reason": "Because they're on Youtube."
    },
    {
      "thing": "Unicorns",
      "reason": "Because it's true they exist."
    },
    {
      "thing": "Raspberry Pi",
      "reason": "Because you can build stuff with them."
    },
    {
      "thing": "Cheese",
      "reason": "Because it's very tasty."
    }
  ]
}
Now, run the build:mock task and let the API developers struggle with their stuff without being bothered. :)

Monday, 26 September 2016

Building a faceted search using Redis and MVC.net - part 4: Using Redis in an MVC-app

There are a number of .Net clients available as NuGet packages. I've chosen StackExchange.Redis, github.com/StackExchange/StackExchange.Redis. It maps well to the commands available in the Redis client, it has good documentation and, well, Stack Overflow uses it, so it really ought to cover my needs... And of course, it's free.

The demo web for the faceted search is available at hotelweb.azurewebsites.net and code can be found on github.com/asalilje/redisfacets.

Connecting to Redis

Once the StackExchange.Redis NuGet package is installed in the .Net solution, we can try a simple Redis query. We want all hotels that have one star, i.e. all members of the set Stars:1:Hotels.
  var connection = ConnectionMultiplexer.Connect("redishost");
  var db = connection.GetDatabase();
  var list = db.SetMembers("Stars:1:Hotels");
The returned list contains the JSON blobs we stored for each hotel, so we need to deserialize them to C# entities using Newtonsoft.Json.
  var hotels = list.Select((x, i) =>
  {
    var hotel = JsonConvert.DeserializeObject<Hotel>(x);
    hotel.Index = i;
    return hotel;
  });
Now, the ConnectionMultiplexer is the central object of this Redis client. It is expensive, it does a lot of work hiding away the inner workings of talking to multiple servers, and it is completely thread-safe. It's designed to be shared and reused between callers, and should not be created per call as in the code above.

The database object you get from the multiplexer, on the other hand, is a cheap pass-through object. It does not need to be stored, and it is your access point to all parts of the Redis API. One way to handle this is to wrap the connection and the Redis calls in a class that creates the connection lazily.
  private static ConnectionMultiplexer Connection => LazyConnection.Value;
  private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
    new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect("redishost"));

  private static IDatabase GetDb() {
    return Connection.GetDatabase(Database);
  }

  public static string GetString(string key) {
    return GetDb().StringGet(key);
  }

Fine tuning the queries

Let's return to the concepts from the earlier parts of this blog series: combinations of sets. Say we want all hotels in Germany that have a bar. Just send in an array of the keys that should be intersected.
  var db = GetDb();
  return db.SetCombine(SetOperation.Intersect,
    new []{"Countries:1:Hotels", "Bar:False"});
Chosen keys within the same category should be unioned before they are intersected with another category. As before, we union them and store the result in the db, so the intersection can be done directly in Redis. In this case, we also send in the name of the new key to store, compounded from the data it contains.
  var db = GetDb();
  db.SetCombineAndStore(SetOperation.Union, "Countries:1:Countries:2:Hotels",
    new []{"Countries:1:Hotels", "Countries:2:Hotels"});
  return db.SetCombine(SetOperation.Intersect,
    new []{"Countries:1:Countries:2:Hotels", "Bar:False"});
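The union-then-intersect logic itself is plain set algebra; what Redis computes for us can be sketched in a few lines of JavaScript (the hotel ids and set contents below are invented stand-ins):

```javascript
// Concept sketch of faceted filtering: union the chosen keys
// within a category, then intersect across categories.
// These sets are made-up stand-ins for the Redis sets.
const sets = {
  'Countries:1:Hotels': new Set(['h1', 'h2']),
  'Countries:2:Hotels': new Set(['h3']),
  'Bar:False':          new Set(['h2', 'h3', 'h4'])
};

const union = (a, b) => new Set([...a, ...b]);
const intersect = (a, b) => new Set([...a].filter(x => b.has(x)));

// Union within the Countries category...
const countries = union(sets['Countries:1:Hotels'], sets['Countries:2:Hotels']);
// ...then intersect the result with the Bar facet.
const result = intersect(countries, sets['Bar:False']);

console.log([...result]); // [ 'h2', 'h3' ]
```

Doing this inside Redis with SetCombineAndStore avoids shipping the intermediate sets back and forth over the network.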
If we want to sort the list according to an external key, we just add the by keyword to the Sort command, pointing to the correct key using the asterisk pattern.
  var db = GetDb();
  db.Sort("Countries:1:Hotels", by: "SortByPrice_*", get: new RedisValue[] {"*"});

Putting it all together

Now we have the Redis concepts, the data modelling and the Redis client in place, and the rest is basically just putting things together. The filter buttons are created dynamically according to what options are available in the db. Each time a filter or sorting option is clicked, or a slider is pulled, a JavaScript event is triggered that creates a URL based on which buttons are chosen.

The call goes via AJAX to the MVC app, which does all the filtering using unions and intersections, fetches and sorts the final list, and disables or enables any affected filter buttons.

All this, as you know, can be done in a number of ways. If you need inspiration or some coding examples, take a look at the code on github.com/asalilje/redisfacets. :)

Friday, 23 September 2016

Leader Election with Consul.Net

Microservices are great and all that, but you know those old-fashioned batch services, like a data processing service or a cache loader that should run at regular intervals? They're still around. These kinds of services often end up on one machine, where they keep running their batch jobs until someone notices they've stopped working. Maybe a machine that serves both stage and production purposes, or maybe the service doesn't even run in stage because no one can be bothered; it's easier to just copy the database from production.

But we can do better, right? One way is to deploy the service to multiple machines, as you would with a web application: use Octopus, deploy the package, install and start the service, then promote the same package to production, doing config transforms along the way. The problem then is that we have a service running on multiple machines, doing the same job multiple times. Unnecessary and, if a third party API is involved, probably unwanted.

Leader election to the rescue

Leader election is really quite a simple concept. The service nodes register with a host using a specific common key. One of the nodes is elected leader and performs the job, while the others stay idle. The lock is held by a specific node for as long as that node's session remains in the host's store. When the session is gone, the leadership is up for grabs by the next node that checks for it. Every time the nodes are scheduled to run their task, this check is performed.

Using this approach, one node does the job while the others stand by. At the same time, we get rid of our single point of failure: if a node goes down, another takes over. And we can incorporate this in our ordinary build chain and treat these services like any other type of application. Big win!
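The pattern itself can be simulated without Consul at all. Here is a toy in-memory version (this is not the Consul API, just an illustration of the check-on-every-run idea):

```javascript
// Toy in-memory leader election. A shared lock store stands in for
// Consul; real sessions, TTLs and network failures are omitted.
class LockStore {
  constructor() { this.holder = null; }
  tryAcquire(nodeId) {
    // Take the lock if free; report whether this node holds it.
    if (this.holder === null) this.holder = nodeId;
    return this.holder === nodeId;
  }
  release(nodeId) {
    if (this.holder === nodeId) this.holder = null;
  }
}

function runScheduledTask(store, nodeId) {
  // Every scheduled run: try to take the lock; only the holder works.
  return store.tryAcquire(nodeId) ? `${nodeId} did the work` : `${nodeId} idle`;
}

const store = new LockStore();
console.log(runScheduledTask(store, 'node-1')); // node-1 did the work
console.log(runScheduledTask(store, 'node-2')); // node-2 idle
store.release('node-1');                        // the leader goes down
console.log(runScheduledTask(store, 'node-2')); // node-2 did the work
```

Consul adds the hard parts this sketch skips: sessions that expire when a node dies, and a consistent store shared across machines.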

An example with Consul.io

Consul is a tool for handling services in your infrastructure. It's good at doing many things and you can read all about it at consul.io. Consul is installed as an agent on your servers, which syncs with one or many hosts. But you can run it locally to try it out.

Running Consul locally

To play around with Consul, download it from consul.io, unpack it and create a new config file in the extracted folder. Name the file local_config.json and paste in the config below.
{
    "log_level": "TRACE",
    "bind_addr": "127.0.0.1",
    "server": true,
    "bootstrap": true,
    "acl_datacenter": "dc1",
    "acl_master_token": "yep",
    "acl_default_policy": "allow",
    "leave_on_terminate": true
}
This will allow you to run Consul and see the logs of calls coming in. Run it by opening a command prompt, moving to the extracted folder and typing:
consul.exe agent -dev -config-file local_config.json

Consul.net client

For a .Net solution, a nice client is available as a NuGet package: https://github.com/PlayFab/consuldotnet. With it, we just create a ConsulClient and get access to all the APIs provided by Consul. For leader election, we need the lock methods in the client. Basically, CreateLock creates the node session in Consul, Acquire tries to assume leadership if no leader exists, and the session property IsHeld is true if the node is elected leader and should do the job.
var consulClient = new ConsulClient();
var session = consulClient.CreateLock(serviceKey);
await session.Acquire();
if (session.IsHeld)
    DoWork();

A demo service

Here's a small service running a timer that fires every 3 seconds. On construction, the service instance creates a session in Consul. Every time the CallTime function is triggered, we check if we hold the lock. If we do, we display the time; otherwise we print "Not the leader". When the service is stopped, we destroy the session so the other nodes won't have to wait for the session TTL to expire.
using System;
using System.Threading;
using System.Threading.Tasks;
using Consul;
using Topshelf;
using Timer = System.Timers.Timer;

namespace ClockService
{
    class Program
    {
        static void Main(string[] args)
        {
            HostFactory.Run(x =>
            {
                x.Service<Clock>(s =>
                {
                    s.ConstructUsing(name => new Clock());
                    s.WhenStarted(c => c.Start());
                    s.WhenStopped(c => c.Stop());
                });
                x.RunAsLocalSystem();
                x.SetDisplayName("Clock");
                x.SetServiceName("Clock");
            });
        }
    }

    class Clock
    {
        readonly Timer _timer;
        private IDistributedLock _session;

        public Clock()
        {
            var consulClient = new ConsulClient();
            _session = consulClient.CreateLock("service/clock");
            _timer = new Timer(3000);
            _timer.Elapsed += (sender, eventArgs) => CallTime();
        }

        private void CallTime()
        {
            Task.Run(() =>
            {
                _session.Acquire(CancellationToken.None);
            }).GetAwaiter().GetResult();

            Console.WriteLine(_session.IsHeld 
                ? $"It is {DateTime.Now}" 
                : "Not the leader");
        }

        public void Start() { _timer.Start(); }

        public void Stop()
        {
            _timer.Stop();
            Task.WaitAll(
                Task.Run(() =>
                {
                    _session.Release();
                }),
                Task.Run(() =>
                {
                    _session.Destroy();
                }));
        }
    }
}

When two instances of this service are started, we get the result below: one node is active and the other is idle.


When the previous leader is stopped, the second node automatically takes over the leadership and starts working.


All in all, quite a nice solution for securing the running of those necessary batch services. :)