Network-social groupwork self-learning "course" on Mondays, learning and honing GitHub-centric skills

Git hooks are cool, let's aim for deploying with them

Indeed, git hooks are cool. Here is what I'm currently using in another project. I intentionally left the first local repository hook file incomplete, as I found it best to just inject a single line into the existing sample and use most of it as is.

What am I achieving with this?

The pre-commit hook runs whatever tests I have deemed necessary to run before every commit. If those tests fail, the commit does not happen and I have to fix my code. This is all done on my local developer machine, and the tests run really fast with mocha. Mostly these are the same tests that gulp mocha-watch runs at each save, but there could be more. I'm using the standard npm test entry point 'test'.

I am using a specific named branch for deploying the code to the remote repository. Since it is not a developer repository, I'm not really interested in what other branches there are, but git refuses a push to the branch that is currently checked out. So, the pre-receive hook checks out the master branch (pretty lonely and empty on the remote, since I'm never pushing master there) before the push is applied. Then, after the push transfer has happened, post-receive is run on the remote repository. Currently it just checks out the specific deployment branch and notifies me to restart my service.

Post-receive on the remote repository would be a nice place to put a trigger that restarts the service. That is not happening yet, however, as I am not recording the process id of the running node process anywhere, and I'm not a big fan of shotgun tools like killall. Once I make the service stoppable through the standard npm entry point 'stop', npm can offer the implicit 'restart', and that will in turn be edited into the post-receive script.
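For reference, the package.json scripts for that could look something like the sketch below. This is only an illustration, not the project's actual configuration: the pidfile name and server entry point are made up. When no explicit restart script is defined, npm restart runs stop followed by start.

```json
"scripts": {
    "start": "node server.js & echo $! > service.pid",
    "stop": "kill $(cat service.pid) && rm service.pid"
}
```

With that in place, the post-receive hook could simply run npm restart instead of asking for a manual restart.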

Also, a staging server for more automatic tests could be set up with these. It would mostly be interested in hooking the extensive tests into post-receive, and possibly, on success, automatically merging the changes into a branch that only ever holds code that has passed all automatic tests. With even more automation, we would only ever push code to this staging server, and it would either notify us with some annoying failure messages or push the code into production automatically.

So, after the long-winded discussion, let's see the code:

In my developer local repository:
file .git/hooks/pre-commit

#!/bin/sh
# lots of stuff from the sample
npm test || exit 1
# the exec stuff from the sample

Critically, the last piece that runs a command through exec should be left last, or deleted. I left it in as it's doing something reasonable, and reasonable automatic tests never hurt anybody.

In the remote repository:
file .git/hooks/pre-receive

#!/bin/sh
cd ..
export GIT_DIR='.git'

echo 'PRE-RECEIVE'
git checkout master

file .git/hooks/post-receive

#!/bin/sh
cd ..
export GIT_DIR='.git'

echo 'POST-RECEIVE'

umask 002 && git reset --hard

git checkout deploy
cat <<EOT
Now just restart the service by hand. No CI yet. Sorry.
EOT

I was trying to set up a basic pre-commit git hook for the calendar-controller pull request branch on my laptop. It finally works. I run the mocha test checking event creation and execution, receive the report from mocha in JSON, and check whether the failure count is greater than 0; if so, I print the failure messages and abort the commit. Works pretty well.

#!/usr/bin/env sh
printf '\nRunning tests, please wait ...\n\n'
# run mocha once and reuse the JSON report for both checks
report=`mocha -R json --bail --timeout 10000 unit_test_events.js`
no_of_failures=`echo "$report" | jq '.stats.failures'`
if [ "$no_of_failures" != 0 ]; then
  echo "$report" | jq '.failures[] | {Error: .fullTitle}'
  exit 1
else
  printf 'All tests passed, proceeding to commit.\n'
fi

By the way, I have used jq (https://stedolan.github.io/jq/) as a command-line JSON parser.

As discussed yesterday, that shell script soon turns into a large and complex piece that is hard to understand.

Currently recommended:

.git/hooks/pre-commit:

#!/bin/sh
npm test || exit 1

This naturally expects a script named test to be defined in package.json. It can be done like so:

package.json:

"scripts": {
    "test": "node ./node_modules/gulp/bin/gulp.js mocha"
}

Further down the line, a gulp task named mocha is needed. Any special command-line arguments for this occasion of mocha should go into package.json.

gulpfile.js:

var gulp = require('gulp');
var mocha = require('gulp-mocha');
var gutil = require('gulp-util');

gulp.task('mocha', function() {
    return gulp.src(['test/*.js'], { read: false })
        .pipe(mocha({ reporter: 'min' }))
        .on('error', gutil.log);
});

As can be seen, the test runner mocha is needed and used. It can be called directly from the command line, from gulp, or via npm. I've found that presetting the command arguments in the gulpfile is good, especially as gulp can watch files very much like coffee --watch.

Finally, I see npm as the standard tool for interacting with our packaged software, so specifying a script for npm is a good thing. By keeping each abstraction level relatively simple, the call chain stays reasonably easy to follow, even though it is rather long.

The pre-commit hook in turn is delightfully simple and easy to pick up. Not to mention that the pre-commit hook and package.json rarely require modification when set up this way.

edit: fixed a typo

That is relevant, good that we have the sample code here for ready reference.

We probably need to stop using the periodic-task library and start using async-polling, as came up in a recent discussion I had with the author. See here.

Today Sayantan and I discussed what the near-future goals of our project would be.

I'll just paste here the slightly mind-farty text I made as notes. These notes are also available for direct editing in our shared Google Drive folder. If you are reading this and want access to that, please ask me.

In effect, we wish to have some mechanisms to automate a very simple system consisting of several CoAP-speaking embedded devices in the following categories:

  1. light
  2. switch
  3. thermometer

The switch will be used to mock all kinds of events such as motion detectors.

Notes:
–8<–8<–
coap RD <-> javascript
to find out ip address of the devices based on their id

internal configuration of devices and rules to form a system
indexed by device id, hooks to rules and events and so on

rule engine
a set of rules that define how the system reacts to stimulus
rule ids
rules have event callback information

promises, lets have them

whenever sending a coap query to some device, also prepare
something (a promise, an event-in-waiting) in a pool for the coap
receiver to find by device id and query id; this basically hooks
incoming coap to the coap we have sent, the rest being either
incoming queries we need to react to somehow or something
we can log for debugging purposes and drop on the floor
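That last note about promises and the pool could be sketched in Node.js roughly like this. This is only a sketch of the idea; the PendingPool name, its methods, and the timeout handling are my own inventions, not code from the project:

```javascript
// A pool of outstanding CoAP requests, keyed by device id + query id.
// The sender registers a promise here; the receiver settles it when a
// matching response arrives. Unmatched responses are dropped (logged).
class PendingPool {
  constructor(timeoutMs) {
    this.pending = new Map();
    this.timeoutMs = timeoutMs;
  }

  key(deviceId, queryId) {
    return deviceId + ':' + queryId;
  }

  // Called when we send a query: returns a promise that resolves when
  // the matching response is settled, or rejects on timeout.
  register(deviceId, queryId) {
    return new Promise((resolve, reject) => {
      const k = this.key(deviceId, queryId);
      const timer = setTimeout(() => {
        this.pending.delete(k);
        reject(new Error('timeout waiting for ' + k));
      }, this.timeoutMs);
      this.pending.set(k, { resolve, timer });
    });
  }

  // Called by the CoAP receiver for every incoming message.
  // Returns true if the message matched a pending request.
  settle(deviceId, queryId, payload) {
    const k = this.key(deviceId, queryId);
    const entry = this.pending.get(k);
    if (!entry) return false; // unsolicited: caller can log and drop it
    clearTimeout(entry.timer);
    this.pending.delete(k);
    entry.resolve(payload);
    return true;
  }
}
```

The nice part is that the sending side can just await the promise, while all the matching bookkeeping lives in one small place.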

Hi all

Next monday, tomorrow, 2016-03-28, is possibly cancelled due to holidays.

At least I am not attending, but naturally the lab might be populated. If you plan to go to the lab tomorrow, please ask on our IRC channel whether someone is going to be there, and when.

  • t

Alright, I will skip too today. See you later.

We've been a bit slow lately, but never mind. Meanwhile I've been working on our infrastructure so that plushie can grow better. I've arranged that a permanently installed local node, sitecontroller.local, has ssh-key git access, and thanks to Sayantan's work we can hook it up with git hooks so that a git push will also restart the service. Ask me for ssh key auth and you'll receive.

At the lab I have installed a special router that can route traffic between sites in the POINT network research project. This is pretty interesting, as we can connect plushie and our local devices with similar setups at several sites around Europe. The POINT project also funds the development of the embedded parts as well as the router software, so they become essentially free for our use.

The CoAP embedded servers are also approaching more useful levels. We should now think a bit about what kind of automation we'd run based on the device types:

  1. lamp
  2. relay box
  3. temperature sensor
  4. motion detector
  5. switch input (pushbutton or toggle)
  6. luminosity sensor (needs work still)

We could, for example, take control of the light system in the lab's lounge with help from the relay boxes. Soon we are going to install new lights in the lounge, and they could definitely benefit from automation. So, let's discuss the different scenes we have:

  1. nobody is in, lights off
  2. people are in, working type lights (all on)
  3. people are in, lecture (all on, lecturer selects from subscenes)
    3.1) all on
    3.2) all on, front wall tv section off
    3.3) all on, front section off
    3.4) all off, front section on
    3.5) all off, whiteboard on
    3.6) all off

This is just a draft and, as you can see, some scenes share their output with other scenes. This is completely okay, and they should not be considered the same scenes. At this point you should think: hold on a bit, isn't this a state machine? It is.
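A state machine like that could be sketched in Node.js as below. This is purely illustrative: the light group names (main, frontTv, front, whiteboard) and the setLight callback are hypothetical placeholders, and only a few of the scenes above are filled in:

```javascript
// Each scene is a state: a complete mapping from light group to on/off.
// Entering a state drives every group to that output, regardless of
// which scene we came from, so shared outputs are harmless.
const scenes = {
  'empty':       { main: false, frontTv: false, front: false, whiteboard: false }, // 1. nobody in
  'work':        { main: true,  frontTv: true,  front: true,  whiteboard: true  }, // 2. all on
  'lecture-3.2': { main: true,  frontTv: false, front: true,  whiteboard: true  }, // front wall tv off
  'lecture-3.4': { main: false, frontTv: false, front: true,  whiteboard: false }, // only front section
  'lecture-3.5': { main: false, frontTv: false, front: false, whiteboard: true  }  // only whiteboard
};

// setLight(group, on) would do the actual work, e.g. send a CoAP
// request to the relay box controlling that group.
function applyScene(name, setLight) {
  const scene = scenes[name];
  if (!scene) throw new Error('unknown scene: ' + name);
  for (const [group, on] of Object.entries(scene)) {
    setLight(group, on);
  }
  return scene;
}
```

A motion detector event is then just a transition, for example from empty to work, and the lecturer's subscene selection is a transition between the lecture states.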

Oh, and we also have some DALI light controllers from Helvar that we could later hook up to our system as well. Alpha-quality Node.js code exists on GitHub.

So this is a very rambly post, but my message is: sorry for letting things go a bit stale, and let's get back to kicking :slight_smile:


Hello Teemu,

Good to know that things are going well. I have not been around for a few weeks. I told you that I would be on time but could not be present. The workload has increased at the office, so I am unable to leave for Hacklab at 5, and I was also unable to work on the project at home. I am extremely sorry for that. At this moment I think I need to take a break; I will start again later, when things are easier for me. Hope you don't mind.

Sayantan …

Hi Sayantan

That's all fine. We will continue with the program just fine; thank you for helping get it started so nicely :slight_smile: Personally, I have misused the Monday sessions for other tasks, so they have not been that useful for co-learning programming skills anyway.

Also, we're calling it quiet on the Monday sessions, as there has been very little participation. If interest rises, we can start having these sessions again.

Anyway, hoping to see you at the lab in generic social terms :slight_smile: