MicroCule, A Love Story

Often I find myself in a situation that requires a quick and dirty custom micro-service to support one Proof-Of-Concept or another. With the wide array of modules available on NPM, I usually resort to Node.js, and in a few minutes I'd have a script that does exactly what I want. Until just a few weeks ago, the only option I had for hosting that service was hook.io, a free micro-services hosting service that provides decent yet not stellar performance. As with any cloud-based free service, things often didn't work at the required performance level, and sometimes the Node.js module I wanted wasn't available on the server. However, short of starting my own app engine, installing all the containers, and taking on all the associated hassle, I had to make do with whatever was generously offered by hook.io.

In comes Microcule. Microcule is a Software Development Kit and Command Line Interface for spawning streaming stateless HTTP microservices for any programming language or arbitrary binary.

Installed on my Amazon micro instance, it doesn't take more than one, yes ONE, command to spawn a fully qualified micro-service to do my bidding. And here is the fun part: it supports 20 different languages, so it's basically like self-hosting a very lean, easy-to-manage middleware that can turn any of your hacked scripts into webservices. Microcule is part of the hook.io project, but it offers the core service without the whole hook.io array of bells and whistles, which I think is a very smart move from the hook.io people, given that most potential users just want to run their own webservices rather than offer a webservices hosting service.

I'm in love with Microcule, given how it has liberated me from Heroku, Google Apps, Amazon Lambda, and the even more cumbersome self-hosted solutions. For all intents and purposes, I think Microcule is the perfect webservices hosting solution for prototyping, testing, and development, perhaps even production with some careful configuration.


Automating the Nespresso Coffee Machine part 2

In this part I explain how to hook up an api.ai agent with Particle, using hook.io as middleware.

Hook.io gets the invocation call from api.ai and acts on the action and its parameters by calling the correct function on the Particle cloud, then responds with the api.ai payload to be displayed to the requester.

The first step is creating a new hook on hook.io and pasting in the following script, modifying the access token and device ID.

module['exports'] = function coffeeNator (hook) {
 var Particle = require('particle-api-js');
 var particle = new Particle();

 var token = 'YOUR TOKEN';
 var myResponse;

 // Helper used to report errors back to the caller.
 function output (data) {
  hook.res.end(JSON.stringify(data, true, 2));
 }

 // The payload shape api.ai expects back from a webhook.
 function MyResponse (aSpeech, aDisplayText, aSource) {
  this.speech = aSpeech;
  this.displayText = aDisplayText;
  this.source = aSource;
 }

 console.log(hook.params.result.parameters.action + " coffee request received");

 if (hook.params.result.parameters.action == "warm") {
  var fnPr = particle.callFunction({ deviceId: 'YOUR DEVICE ID', name: 'warmmachine', argument: 'D0:HIGH', auth: token });
  fnPr.then(function (data) {
   console.log("called warmmachine successfully");
   myResponse = new MyResponse("Warming coffee machine for you", "Coffee Machine Being Warmed", "Coffee Machine");
   hook.res.writeHead(200, {"Content-Type": "application/json"});
   hook.res.end(JSON.stringify(myResponse, true, 2));
  }, function (err) {
   output('An error occurred: ' + err);
  });
 } else if (hook.params.result.parameters.action == "make") {
  var fnPr = particle.callFunction({ deviceId: 'YOUR DEVICE ID', name: 'makecoffee', argument: 'D0:HIGH', auth: token });
  fnPr.then(function (data) {
   console.log("called makecoffee successfully");
   myResponse = new MyResponse("Making coffee for you", "Coffee Being Made", "Coffee Machine");
   hook.res.writeHead(200, {"Content-Type": "application/json"});
   hook.res.end(JSON.stringify(myResponse, true, 2));
  }, function (err) {
   output('An error occurred: ' + err);
  });
 }
};
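For reference, the fields the script reads (`hook.params.result.parameters.action`) come from the JSON payload api.ai POSTs to the webhook. A trimmed, illustrative example of the relevant part (field values match the warm-coffee request from part 3):

```json
{
  "result": {
    "action": "coffeemachine",
    "parameters": {
      "action": "warm",
      "coffee_machine": "coffee machine"
    }
  }
}
```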

Automating the Nespresso Coffee Machine Part 3



Step 1: 

Follow the steps in my previous blog entry to connect the Nespresso machine to the internet using particle photon and a servo.

Step 2: 

Now we build an api.ai agent to handle the natural-language request translation. This requires creating an agent and training it to understand the requests and how to translate them.

Create Agent

Each agent should have a specific domain, in our case the agent’s domain will be home automation.


Create new Intent make.coffee

The intent is the action flow for the agent. It includes the agent's training set, the expressions it should expect, and the parameters it needs to gather before invoking an action; in our case, the device to use and whether to warm the device or actually make the coffee.

Create Entities 

Here we create the entities (which also serve as parameters) that the intent will use.

I defined two abstract entities, device and action.

I then created one device instance, Coffee Machine, and listed all the synonyms I usually use to refer to it.

Two actions were then defined, Warm and Make, also with all the synonyms usually used to refer to them.


Configure intent make.coffee

Now we go back to the intent and add the user expressions (commands). I added expressions such as:

  • can I have some coffee please
  • heat up the Nespresso machine
  • prepare the Nespresso machine
  • make me some coffee please

Make sure that the entities are mapped in your expressions. If they are not, you can click on any of the terms and manually map it to the correct entity.


The next step is to define the required parameters for the actions. Scroll to the actions section, expand it, and add the required entities; in this case we have two required parameters: Device and Action. Also add what the agent should prompt the user with if a parameter is missing. For instance, you can ask the user for the action if it was not picked up from the initial interaction, or for the device if it was not mentioned.

Fill the action field with a name the backend code can use to execute the user's request, such as coffeemachine. Our backend will use this, along with the parameters, to work out how to execute the request.




Scroll down to the fulfillment section and tick the use webhooks checkbox.



Go to the response section and add a response such as: “working on it.” This will be shown if the service times out; otherwise it will be replaced by the webservice response.

Test Your agent

In theory the agent is now ready to translate requests, so it's time for some QA. The test scenarios you should try are a fully completed request and a partial request, to check whether the prompt for more information gets triggered.

The JSON interpretation of the request should include the request interpretation in the result object, in this case:

action -> warm

coffee_machine -> coffee machine


Integrate with FB chat 

This will allow you to talk to your bot through Facebook chat. Follow the steps here -> How to link api.ai with FB chat.

Step 3:

In this step we set up the webhook endpoint that the agent is going to use as a backend.

Go to the fulfillment tab and fill in the URL acquired while building the hook.io service (part 2).

Now you are ready to go, test your bot again and you should get your coffee made for you.

Automating the Nespresso Coffee Machine part 1

I like drinking coffee first thing in the morning, however preparing coffee in the early morning is not something I’m a big fan of, so I decided to use my particle photon to automate the coffee making process. Thus turning my normal Nespresso machine to an IoT enabled machine.

The steps for this build are quite simple, it requires the following components :

  • Particle Photon
  • Servo
  • Power Bank
  • Paper Clip
  • Rubber Band
  • Velcro Tape

To build it, connect the servo to the Photon, noting that the data pin must be connected to one of the PWM-enabled pins; in my build I used pin D0.

Twist the paper clip and attach it to the servo, keeping it in place with the rubber band; this serves as the push rod that will press the coffee maker's button.


Attach the servo to the Nespresso machine using the velcro tape. You'll need to adjust the servo angles in the code to work with your machine and the way the servo is attached to its body.

Connect it to the IFTTT service and enjoy your coffee. Personally, I then created a DO button on my phone to press first thing in the morning.


Downloading Torrents Remotely

I've set up a home server, and wanting a simple, straightforward way to download torrents remotely, I relied on an old hack I'd heard about but never attempted. You can configure Transmission, the torrent client, to pick up torrent files from a certain directory; sharing this directory on Google Drive means you can drop in torrent files for Transmission to download for you at home.


The problem is you can't really monitor the progress of the torrents, and some of these torrent files may not even start. So I decided to write a small shell script that monitors two events and updates me through Pushbullet: the first event is a torrent download starting (the creation of a *.part file), and the second is the completion of a download (a new file in the downloads directory).

The script works as follows:



#!/bin/bash

# Set these to match your setup; the original values were defined in the
# script header (not shown here).
# partsLocation=...      directory transmission downloads into
# completeLocation=...   directory where completed files land
# pushBulletAPI=...      your Pushbullet access token

# Scan for newly started downloads (*.part files)
for i in $(find $partsLocation -maxdepth 1 -name "*.part" | sed -e "s/ /_/g"); do
 echo "Handling File"
 echo $i
 echo "----------------------"

 # check whether this file was already reported
 countOfParts=$(cat parts.log | grep "$i" | wc -l)
 echo $countOfParts
 if [ $countOfParts -gt 0 ]; then
  echo "already listed"
 else
  echo "new file, adding to parts.log"
  echo $i >> parts.log
  curl -u $pushBulletAPI: https://api.pushbullet.com/v2/pushes -d type=note -d title="Tor Started" -d body="Download started for file $i"
 fi
done

# Scan for complete files
for i in $(find $completeLocation -maxdepth 1 ! -name '*.part' ! -name '*.log' ! -name "*.sh*" ! -name "." | sed -e "s/ /_/g"); do
 echo "Handling File"
 echo $i
 echo "----------------------"

 # check whether this file was already reported
 countOfComplete=$(cat complete.log | grep "$i" | wc -l)
 echo $countOfComplete
 if [ $countOfComplete -gt 0 ]; then
  echo "already listed"
 else
  echo "new file, adding to complete.log"
  echo $i >> complete.log
  curl -u $pushBulletAPI: https://api.pushbullet.com/v2/pushes -d type=note -d title="Tor Completed" -d body="Download Completed for file $i"
 fi
done

Deploying A Mobile APP in an Enterprise Environment

The use of mobile applications on consumer-grade devices is increasing in popularity, as more and more companies use customised apps on mobile devices for field purposes instead of a purpose-built device. Examples of such implementations range from biometrics scanning and merchandise delivery to taxi dispatching.


Certain risks are associated with this approach, since unlike with web applications, each user is responsible for managing and updating their own version, just like in the pre-web-apps days when people used desktop applications. These risks include:

  1. Using an obsolete version of the app that is no longer compatible with the backend.
  2. Using a version of the app that includes a critical security issue.
  3. Incorrect business process due to the use of an older version of the app.
  4. Using an unofficial version of the app against the same backend, thus bypassing any front-end validations.

There are certain guidelines that can be followed to control the inherent risk; I'm going to list some of them here.

I. Upgrade Enforcement 

For critical upgrades that render previous versions obsolete, for instance changing the business process or introducing a critical security enhancement, the best practice is to break the backend's backward compatibility.

Breaking the backend's backward compatibility can be done by adding an app-version check to every request. Including the app version in every request is easy and has a negligible cost in both data and processing, yet is very useful when needed. The server response should include an error code that triggers a “You Must Upgrade Now” message.
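The check itself is a few lines. The sketch below is illustrative: the `appVersion` field name, the minimum version, and the `UPGRADE_REQUIRED` error code are assumptions, not a fixed contract.

```javascript
// Compare dotted version strings numerically, segment by segment.
function isVersionSupported(appVersion, minVersion) {
  const a = appVersion.split('.').map(Number);
  const b = minVersion.split('.').map(Number);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const x = a[i] || 0, y = b[i] || 0;
    if (x !== y) return x > y;
  }
  return true; // equal versions are supported
}

// Server-side gate applied to every incoming request.
function handleRequest(request) {
  if (!isVersionSupported(request.appVersion, '2.0.0')) {
    // Error code the app maps to a "You Must Upgrade Now" screen.
    return { status: 426, error: 'UPGRADE_REQUIRED' };
  }
  return { status: 200 };
}
```

HTTP 426 (Upgrade Required) is a natural fit for the error code, but any agreed-upon value works as long as the app recognises it.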

II. Upgrade Notification

For less critical updates, push notifications can be used to suggest an upgrade to the customer; a more aggressive approach (for Android) is to handle the push by opening the Play Store at the application's URI. The frequency of these notifications can reflect how important the update is.

III. Application Verification

To guard against unofficial apps, the API should include a verification token. There are many ways to implement this; one of the easiest is encrypting one of the fields (a timestamp, for instance) and sending both the encrypted and the unencrypted versions. The backend then verifies the app by comparing the decrypted field against the plaintext one; if they do not match, the response should indicate that.

IV. Root Check/Emulator Check

Rooted phones can offer a malicious user the means to manipulate the backend calls while keeping the verification field intact. A root check can be conducted on the device every time an activity is started. Emulators are easy to uncover as well.

V. Malicious Usage checks

Just in case all of your checks fail, the backend should conduct at least a rudimentary malicious-behaviour check, blocking devices that exhibit unexpected behaviour.

VI. Connectivity Issues

Even with the advances in cellular coverage, 3G/4G service remains spotty, especially in rural areas. There are a few ways to mitigate this, depending on the nature of the requirements. If no online/sync operations are required, you can implement a simple request-caching service, in which server-side requests are cached to be retried when connectivity is available.
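The caching service can be sketched as a small queue in front of the app's HTTP layer; requests made while offline are held and replayed when connectivity returns. The `sendRequest` callback below is a hypothetical stand-in for the real network call.

```javascript
// Minimal request cache: queue while offline, flush when back online.
class RequestCache {
  constructor(sendRequest) {
    this.sendRequest = sendRequest; // the app's real HTTP layer
    this.queue = [];
    this.online = false;
  }

  submit(request) {
    if (this.online) {
      this.sendRequest(request);
    } else {
      this.queue.push(request); // cache for later retry
    }
  }

  setOnline(online) {
    this.online = online;
    // Replay cached requests in order once connectivity is back.
    while (this.online && this.queue.length > 0) {
      this.sendRequest(this.queue.shift());
    }
  }
}
```

A production version would also persist the queue to disk and handle retry failures, but the shape is the same.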

VII. Usage Patterns Analytics (for android)

Creating usage heat maps is important for determining how people actually use the application and whether certain features should be augmented, or removed due to lack of use. Luckily, Google Analytics can be integrated to track usage and activity launches; it can even be used to track the actions of individual controls.


I hope this post was helpful. I'm planning to write another post soon on how to conduct unit, QA, and scale tests on enterprise apps.

Extracting DM Images Over Twitter API

Extracting images from received DMs over the Twitter API has proven to be rather tricky, and there isn't much information on how to do it. In this entry I'm going to explain how to achieve it programmatically, using both curl and twitter4j.

*Note: make sure the app you are using has the “Read, Write and Access direct messages” permission; otherwise you'll get an “HTTP/1.1 401 Authorization Required” error.

Step 1: Get the image's TON URL

Within DMs, images are represented as media entities. The media entity has several links; the one you need is “media_url”, the https://ton.* link.


Step 2: Use the link to download the image

Using Twitter4j: 

Call twitter.getDMImageAsStream(“TON URL“); this will return an input stream of the image.

Using Curl:

Put the URL in the GET command with the standard Twitter OAuth headers.