The Indian Aadhaar Biometric Miracle

I belong to a generation that has forsaken the pen for the keyboard, so the skill required to maintain a consistent signature is long gone. Personally, I have never trusted identity verification by matching a squiggly line on a piece of paper; it just doesn't feel accurate enough. Trying to visually match someone's appearance to a tiny black-and-white photo on a government-issued ID is even less reliable. Luckily, biometric verification is slowly replacing these inaccurate methods. Many countries have already adopted biometrics for border control, but very few use them past the point of entry. Enter the Indian Aadhaar project. In a few years the Indian government accomplished nothing short of a miracle: collecting the biometric details (fingerprints and iris scans) of more than 1.1 billion citizens, a feat that reportedly required a $1 billion investment, and then gradually introducing biometric verification into daily life to replace the existing, less accurate methods.

The relatively low price of the fingerprint scanner has made it a ubiquitous and cheap identity verification tool. Reportedly more than 20 million checks are done every day; assuming a 12-hour day, that is nearly 28,000 requests per minute, all conducted in real time over public web services. One can only wonder about the SLA requirements for that system, or the architecture used to run it. In India such a system pays for itself: the government was able to close subsidy leakages that were being dispensed to ghost citizens who existed only on paper. An article in The Economist applauded this, stating that India has leapfrogged every country except Estonia. That is an inaccuracy, as Saudi Arabia has already implemented such a system successfully: in Saudi Arabia, phone lines can only be sold after an online fingerprint verification performed at the shop selling the line, through an online web service in real time.
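The throughput figure is easy to sanity-check with back-of-the-envelope arithmetic (the 12-hour active day is an assumption, not an official figure):

```javascript
// Rough check of the quoted Aadhaar verification rate.
const checksPerDay = 20000000;  // reported daily verification requests
const activeMinutes = 12 * 60;  // assuming a 12-hour active day
const perMinute = checksPerDay / activeMinutes;
console.log(Math.round(perMinute)); // 27778, i.e. roughly 28,000 per minute
```

Spread over a full 24-hour day the average would halve, but peak load on the verification service is likely well above either average.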

However, both the Indian and the Saudi programs are far from perfect. Fingerprint verification technology has certain challenges that both programs have inherited: not all people have clear fingerprints, and people who engage in manual labour often have fingerprints that can't be verified. More importantly, online verification requires prevalent internet access, which isn't always available in remote areas. The first problem can be resolved as iris scanners become cheaper and easier to use, since iris imprints are much more accurate and stable than fingerprints. Internet access, however, is a challenge that can only be resolved by a huge infrastructure investment, which in my opinion is the largest hidden cost of such a program.

I wonder if we are going to see this digitization of identity verification implemented in Egypt any time soon.

MicroCule, A Love Story

Often I find myself in a situation that requires a quick-and-dirty custom micro-service to support one proof of concept or another. With the wide array of modules available on NPM I usually resort to Node.js, and in a few minutes I'd have a script that does exactly what I want. Until just a few weeks ago, the only option I had for hosting that service was hook.io, a free micro-services hosting service that provides decent yet not stellar performance. As with any free cloud-based service, things often didn't work at the required performance level, and sometimes the Node.js module I wanted wasn't available on the server. But short of starting my own app engine, installing all the containers, and taking on all the associated hassle, I had to make do with whatever was generously offered by hook.io.

Enter microcule. Microcule is a Software Development Kit and Command Line Interface for spawning streaming stateless HTTP microservices for any programming language or arbitrary binary.

Installed on my Amazon micro instance, it doesn't take more than one, yes, ONE command to spawn a fully qualified micro-service to do my bidding. And here is the fun part: it supports 20 different languages. Basically, it is like self-hosting a very lean, easy-to-manage middleware that can turn any of your hacked scripts into web services. Microcule is part of the hook.io project, but it offers the core service without hook.io's whole array of bells and whistles, which I think is a very smart move by the hook.io people, given that most potential users would rather run their own web services than offer a web-services hosting service.

I'm in love with microcule, given how it has liberated me from Heroku, Google Apps, Amazon Lambda, and the even more cumbersome self-hosted solutions. For all intents and purposes, I think microcule is the perfect web-services hosting solution for prototyping, testing, and development, perhaps even production with some careful configuration.

A Node.js Script to Send Email over SMTP

I've had a challenge for which the only solution was building a Node.js webhook adapter for my SMTP server. Using webhooks and the following script, I was able to add API access to my email server (which it lacked).

// Requires the 'smtp-connection' NPM module: npm install smtp-connection
const SMTPConnection = require('smtp-connection');

var options = {
  host: 'smtp.gmail.com',
  port: 465,
  secure: true,
  // requireTLS: true,
  ignoreTLS: true,
  debug: true
};

var envelope = {
  from: '<user>@gmail.com',
  to: 'badr42@gmail.com'
};

var auth = {
  user: '<user>@gmail.com',
  pass: '<pass>'
};

var message = 'hello world';

let connection = new SMTPConnection(options);
connection.connect(function (err, connect) {
  if (err) {
    console.log('error connecting');
    console.log(err);
  } else {
    console.log('attempting to login');
    console.log(connect);

    connection.login(auth, function (err, connect) {
      if (err) {
        console.log('error logging in');
        console.log(err);
      } else {
        console.log('logged in');
        console.log(connect);
        connection.send(envelope, message, function (err, result) {
          if (err) {
            console.log(err);
          } else {
            console.log('email sent');
            console.log(result);
            connection.quit();
          }
        });
      }
    });
  }
});

What's a Business Architect?

TOGAF divides enterprise architecture into four domains: Business, Data, Application, and Technology. The most elusive to define is the business architecture domain; the business architect role is often reduced to that of a business analyst, thus missing the added value a business architect can bring to an organization. The business architect is the most important role when it comes to the future of the organization, working on the "what" of that future while technology works on the "how". A business architect also serves to bridge the gap between the different silos, making sure there is a holistic vision that incorporates all the needs of the organization, and an IT strategy to match it.

Lately I came across what must be the best definition of the business architect's role, in "Be the Business: CIOs in the New Era of IT". In this must-read for any aspiring CIO, the author defines a business architect as someone who architects the future vision of the business; someone who looks at where the overall enterprise is going. Business architects need to be able to think strategically but, equally important, they need to make that strategy actionable. She goes on to list what she looks for in business architects: "They can think conceptually, abstractly, and they speak the language of the business. But I'm also looking for people who have a systems architecture background, so that they understand how systems work together. It's a tough skill-set, and because of that, we augment the team with some outside resources."

I believe that as organizations become more technology-centric, and given the unique vantage point IT has across all silos and domains, the role of the business architect will gain prominence over the next few years as a cross-functional, blended executive able both to represent business within technology circles and to represent technology within business circles.

BSS Next Horizon: Intelligent BSS

Business support systems (BSS) are one of the main pillars of any telecom. They have been around and evolving for a while now, and the next horizon is just around the corner. BSS was founded on the premise that the telecom knows what's best for its customers: products and services are built based on what the telecom expects will fit most customers, similar to mass-produced products where the lowest common denominator is sought after, a product which is acceptable to most, loved by some, hated by a few.

Globally, the new trend is tailored products. With millennials having high expectations and a sense of individuality that mass-produced products fail to satisfy, this trend can be seen in the rise of tailor-made custom products that rely on a social angle to sell, allowing customers to express their individuality through their custom-built, fine-tuned products.

Forbes article on millennials

To meet these new expectations, telecoms are shifting towards a more nimble stack that allows for customer-tailored products and rapid service releases. Telecom vendors are picking up on the trend and proposing smarter, more intelligent frameworks such as Huawei's BES (Business Enabling Systems) and Ericsson's DBSS (Digital BSS).


ROADS is a concept Huawei is promoting which, in my opinion, is a very concise way to describe what the next-generation BSS should include.

Real-Time: Promos, product recommendations, and order fulfillment should all take place in real time.

On-Demand: Products and services can be tailored to match the customer's needs on the fly; soft bundles can be built as needed, with a flexible micro-services architecture to cope with them.

All-Online: Anything the customer needs should be possible to do online in a channel-agnostic way, with an omni-channel approach that empowers customers to conduct their requests through the channel most suitable to their lifestyle.

Do It Yourself: Customers should be able to manage and use their accounts with minimal to no manual interaction with the provider. As with Google products, customers should be empowered to resolve any issue or requirement they have. Customers should also be part of the product and service definition process.

Social: Socially friendly products that are compatible with the social infrastructure that currently makes up the internet. Customers should be able to exchange or even sell their surpluses through social channels. Gamification features that drive engagement and increase customer loyalty should be built into the product's DNA.

This concept, along with others, allows telecoms to meet the new challenges posed by a shifting market in which customer needs are drastically different from what was the norm for the last few decades.


Automating the Nespresso Coffee Machine part 2

In this part I explain how to hook up an api.ai agent with Particle, using hook.io as middleware.

Hook.io receives the invocation call from api.ai and acts on the action and its parameters by calling the correct function on the Particle cloud, then responds with the api.ai payload to be displayed to the requester.

The first step is creating a new hook on hook.io and pasting the following script; modify the access token and device ID.


module['exports'] = function coffeeNator (hook) {
  var myResponse;
  var Particle = require('particle-api-js');
  var particle = new Particle();

  function output (data) {
    hook.res.end(JSON.stringify(data, true, 2));
  }

  var token = 'YOUR TOKEN';

  console.log(hook.params.result.parameters.action + ' coffee request received');

  if (hook.params.result.parameters.action == 'warm') {
    warmCoffeeMachine();

    var fnPr = particle.callFunction({ deviceId: 'YOUR DEVICE ID', name: 'warmmachine', argument: 'D0:HIGH', auth: token });

    fnPr.then(
      function (data) {
        console.log('warmmachine called successfully');
        myResponse = new MyResponse('Warming coffee machine for you', 'Coffee Machine Being Warmed', 'Coffee Machine');
        hook.res.writeHead(200, { 'Content-Type': 'application/json' });
        hook.res.end(JSON.stringify(myResponse, true, 2));
      }, function (err) {
        output('An error occurred:');
        console.log(err);
      });

  } else if (hook.params.result.parameters.action == 'make') {
    makeCoffeeMachine();

    var fnPr = particle.callFunction({ deviceId: 'YOUR DEVICE ID', name: 'makecoffee', argument: 'D0:HIGH', auth: token });

    fnPr.then(
      function (data) {
        console.log('makecoffee called successfully');
        myResponse = new MyResponse('Making coffee for you', 'Coffee Being Made', 'Coffee Machine');
        hook.res.writeHead(200, { 'Content-Type': 'application/json' });
        hook.res.end(JSON.stringify(myResponse, true, 2));
      }, function (err) {
        output('An error occurred:');
        console.log(err);
      });
  }

  // Constructor for the api.ai response payload.
  function MyResponse (aSpeech, aDisplayText, aSource) {
    this.speech = aSpeech;
    this.displayText = aDisplayText;
    this.source = aSource;
  }

};

function warmCoffeeMachine () {
  console.log('machine is warming');
}

function makeCoffeeMachine () {
  console.log('machine is making');
}

Automating the Nespresso Coffee Machine Part 3

Architecture 


Step 1: 

Follow the steps in my previous blog entry to connect the Nespresso machine to the internet using a Particle Photon and a servo.

Step 2: 

Now we build an api.ai agent to handle the natural-language request translation. This requires creating an agent and training it to understand the requests and how to translate them.

Create Agent

Each agent should have a specific domain; in our case, the agent's domain is home automation.


Create a New Intent: make.coffee

The intent is the action flow for the agent. It includes the agent's training set, the expressions it should expect, and the parameters it must gather before invoking an action; in our case, the device to use and whether to warm the device or actually make the coffee.

Create Entities 

Here we create the entities (which also serve as parameters) that the intent will use.

I defined two abstract entities: device and action.

I then created one device instance, Coffee Machine, and listed all the synonyms I usually use to refer to it.

Two actions were then defined, warm and make, also with all the synonyms usually used to refer to them.


Configure intent make.coffee

Now we go back to the intent and add the user expressions (commands). I added expressions such as:

  • can I have some coffee please
  • heat up the Nespresso machine
  • prepare the Nespresso machine
  • make me some coffee please

Make sure that the entities are mapped in your expressions. If not, you can click on any of the terms and manually map it to the correct entity.


The next step is to define the required parameters for the actions. Scroll to the actions section, expand it, and add the required entities; in this case we have two required parameters: Device and Action. Also add what the agent should use to prompt the user if a parameter is missing. For instance, you can ask the user for the action if it was not picked up from the initial interaction, or for the device if it was not mentioned.

Fill the action field with a name the backend code can use to execute the user's request, such as coffeemachine; our backend will use this, along with the parameters, to work out how to execute the request.


Scroll down to the fulfillment section and tick the "use webhooks" checkbox.


Response

Go to the response section and add a default response such as "working on it." This will later be replaced by the web-service response, unless the service times out.

Test Your agent

In theory the agent is now ready to translate requests, so it's time for some QA. The test scenarios you should try are a fully completed request and a partial request, to check whether the prompt for more information is triggered.

The JSON interpretation of the request should include the request interpretation in the result object; in this case:

Action ->  warm

coffee_machine ->  coffee machine
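Put together, the slice of the webhook payload our backend reads looks roughly like this. This is a hedged sketch based on the api.ai v1 format; the real body carries more metadata (session IDs, timestamps, and so on), and field names outside `result.parameters` may differ.

```javascript
// Hypothetical sample of the api.ai webhook body after a "warm" request.
var samplePayload = {
  result: {
    action: 'coffeemachine',             // action name configured on the intent
    parameters: {
      action: 'warm',                    // which operation the user asked for
      coffee_machine: 'coffee machine'   // the resolved device entity
    }
  }
};
// This is the path the hook.io script from part 2 reads:
console.log(samplePayload.result.parameters.action); // "warm"
```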


Integrate with FB chat 

This will allow you to talk to your bot through Facebook chat. Follow the steps here: How to link api.ai with FB chat.

Step 3 :

In this step we set up the webhook endpoint that this agent is going to use as a backend.

Go to the fulfillment tab and fill in the URL acquired while building the hook.io service (part 2).

Now you are ready to go. Test your bot again and you should get your coffee made for you.