Design or Develop

A while ago I was explaining the architecture of a solution I had developed when a colleague remarked that my solution was based on integrating off-the-shelf (OTS) solutions rather than custom development, adding that a custom development would have been more impressive, more efficient, and superior in every way. As a solution architect, that's a question I have to answer at the beginning of almost every project I am responsible for, and the answer is almost always to customize and integrate. In this post I'm going to explain the logic behind such a decision.

  1. Someone has done it better: you might have access to the best developers, but unless you are developing something within your company's core technology you don't have the accumulated experience someone else has, especially if that solution is within their core technology area. When you limit yourself to in-house developed solutions, you also limit yourself to the skills within the confines of your company.
  2. Is it really custom development: unless you are writing machine language or using your own compiler, you are actually customizing; the only difference is the size of the building block. Developers use libraries and packages that someone with more experience has built, and most use them as black boxes without fully understanding their inner workings, and they really shouldn't have to.
  3. Cost: in theory you can build everything if you have no cost or time constraints; however, unless it's a toy or personal project, cost and time constraints are of key importance. The more customization a project requires, the more costly it becomes in terms of time and money. Licenses cost money, but they are almost always cheaper than the investment needed to build the same functionality in-house. There are also hidden costs associated with the risks involved and the bugs that will be discovered during QA and the initial customer experience.
  4. Accumulated experience: vendors building OTS solutions amass the combined experience of their customers, which means they've covered more use cases and edge cases than your in-house development team ever will. This experience is merged into the patches and product updates.
  5. Compliance: it's a lot easier to comply with certification criteria when using larger building blocks (OTS), as the certification bodies are familiar with the OTS solutions and the vendors are usually pre-certified. Trying to certify an in-house developed solution can be both expensive and time-consuming.
  6. Integration best practices: this point is a bit tricky to explain. When an in-house solution is built, developers take integration shortcuts to save time that are often far from industry best practices. Using OTS solutions enforces a certain level of conformity to best practices; this is valuable because at a later stage components can be swapped out with minimal impact.
  7. Operability & extensibility: even if you have the ultimate bespoke solution built specifically for you, finding people to operate it or build on top of it in the future might prove problematic, especially if the original team who built it has moved on.

So these are the reasons why you should almost always go for customization of OTS solutions rather than in-house development, using larger building blocks with micro-services. What about the situations where you should in fact build your own solution, going for the smallest building block possible (the language's default libraries)? That's my next post.

The Indian Aadhaar Biometric Miracle

I belong to a generation that has forsaken the pen for the keyboard, and hence the skills required to maintain a consistent signature are long gone. Personally, I've never trusted identity verification by matching a squiggly line on a piece of paper; it just doesn't feel accurate enough. Trying to visually match someone's appearance to a tiny black-and-white photo on a government-issued ID is even less accurate. Luckily, biometric verification is slowly replacing such old, inaccurate methods. Many countries have already adopted it for border control, yet very few countries use biometric methods past the point of entry. In comes the Indian Aadhaar project. In a few years the Indian government accomplished nothing short of a miracle: collecting the biometric details (fingerprints and iris scans) of more than 1.1 billion citizens, a feat that reportedly required a $1 billion investment, then gradually introducing biometric verification to replace the existing, less accurate methods for many daily functions.

The relatively low price of fingerprint scanners has made them a ubiquitous and cheap identity verification tool. Reportedly, more than 20 million checks are done every day, which works out to roughly 28,000 requests per minute (assuming a 12-hour day). One can only wonder about the SLA requirements for that system, or the architecture used to support it: the checks are all done in real time over public web services. In India such a system is paying for itself, as the government was able to close the subsidy leakages that were being dispensed to ghost citizens who existed only on paper. This was applauded in an Economist article stating that India has leapfrogged every country except Estonia. That is an inaccuracy, as Saudi Arabia has already implemented such a system successfully: in Saudi Arabia, phone lines can only be sold after an online fingerprint verification done at the shop selling the line, through an online web service in real time.
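The load figure above is simple back-of-the-envelope arithmetic; as a sketch (the 12-hour active day is the assumption stated in the text):

```javascript
// Rough sustained-load estimate for the verification service.
// Assumes the ~20 million daily checks are spread over a 12-hour active day.
function requestsPerMinute(checksPerDay, activeHours) {
  return Math.round(checksPerDay / (activeHours * 60));
}

console.log(requestsPerMinute(20e6, 12) + ' requests per minute'); // → 27778 requests per minute
```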

However, both the Indian and the Saudi programs are far from perfect. Fingerprint verification technology has certain challenges that both programs have inherited: not all people have clear fingerprints, and people who engage in manual labour often have fingerprints that can't be verified. More importantly, online verification requires prevalent internet access, which isn't always available in remote areas. The first problem can be resolved as iris scanners become cheaper and easier to use, since iris imprints are much more accurate and stable than fingerprints. Internet access, however, is a challenge that can only be resolved by a huge infrastructure investment, which in my opinion is the largest hidden cost of such a program.

I wonder if we are going to see this, the digitization of identity verification, implemented in Egypt any time soon.

MicroCule, A Love Story

Often I find myself in a situation that requires a quick and dirty custom micro-service to support one proof of concept or another. With the wide array of NPM modules available, I usually resort to Node.js, and in a few minutes I'd have a Node.js script that does exactly what I wanted. Until just a few weeks ago, the only option I had for hosting that service was hook.io, a free micro-services hosting service that provides decent yet not stellar performance. As with any cloud-based free service, things often didn't work at the required performance level, and sometimes the Node.js module I wanted wasn't available on the server. But short of starting my own app engine, installing all the containers, and all the associated hassle, I had to make do with whatever was generously offered by hook.io.

In comes microcule. Microcule is a software development kit and command-line interface for spawning streaming, stateless HTTP microservices for any programming language or arbitrary binary.

Installed on my Amazon micro instance, it doesn't take more than one, yes ONE, command to spawn a fully qualified micro-service to do my bidding. And here is the fun part: it supports 20 different languages, so it's basically like self-hosting a very lean, easy-to-manage middleware that can turn any of your hacked scripts into web services. Microcule is part of the hook.io project, but it offers the core service without the whole hook.io array of bells and whistles, which I think is a very smart move from the hook.io people, given that most potential users would just want to run their own web services rather than offer a web-services hosting service.

I'm in love with microcule, given how it has liberated me from Heroku, Google Apps, Amazon Lambda, and the even more cumbersome self-hosted solutions. For all intents and purposes, I think microcule is the perfect web-services hosting solution for prototyping, testing, and development, perhaps even production with some careful configuration.

Node.js script to send email over SMTP

I had a challenge for which the only solution was building a Node.js webhook adapter for my SMTP server. Using webhooks and the following script, I was able to enable API access for my email server (which it lacked).

const SMTPConnection = require('smtp-connection');

// Connection options for Gmail's SMTP endpoint.
var options = {
  host: 'smtp.gmail.com',
  port: 465,
  secure: true, // implicit TLS from the start (port 465)
  // requireTLS: true,
  ignoreTLS: true, // no effect when secure is true; STARTTLS never applies
  debug: true
};

var envelope = {
  from: '<user>@gmail.com',
  to: 'badr42@gmail.com'
};

var auth = {
  user: '<user>@gmail.com',
  pass: '<pass>'
};

var message = 'hello world';

let connection = new SMTPConnection(options);
connection.connect(function (err, connect) {
  if (err) {
    console.log('error connecting');
    console.log(err);
  } else {
    console.log('attempting to login');
    console.log(connect);
    connection.login(auth, function (err, connect) {
      if (err) {
        console.log('error logging in');
        console.log(err);
      } else {
        console.log('logged in');
        console.log(connect);
        connection.send(envelope, message, function (err, result) {
          if (err) {
            console.log(err);
          } else {
            console.log('email sent');
            console.log(result);
            connection.quit();
          }
        });
      }
    });
  }
});

What's a Business Architect

TOGAF divides enterprise architecture into four domains: Business, Data, Application and Technology. The most elusive to define is the business architecture domain; the business architect role is often reduced to that of a business analyst, thus missing the added value a business architect can bring to an organization. The business architect is the most important role when it comes to the future of the organization, as he works on the "what" of that future while the technology roles work on the "how". A business architect also serves to bridge the gap between the different silos, making sure there is a holistic vision that incorporates all the needs of the organization, and an IT strategy to match it.

Lately I came across what must be the best definition of the business architect's role, in "Be the Business: CIOs in the New Era of IT". In this must-read for any aspiring CIO, the author defines a business architect as someone who architects the future vision of the business; someone who looks at where the overall enterprise is going. Business architects need to be able to think strategically but, equally important, they need to make that strategy actionable. She goes on to list what she looks for in business architects: "They can think conceptually, abstractly, and they speak the language of the business. But I'm also looking for people who have a systems architecture background, so that they understand how systems work together. It's a tough skill-set, and because of that, we augment the team with some outside resources."

I believe that as more organizations become technology-centric, and given the unique vantage point IT has across all silos and domains, the role of the business architect is going to gain prominence over the next few years as a cross-functional, blended executive who can represent business within technology circles as well as technology within business circles.

BSS Next Horizon, Intelligent BSS

Business support systems (BSS) are one of the main pillars of any telecom. They have been around and evolving for a while now, and we are at the point where the next horizon is just around the corner. BSS was founded on the premise that the telecom knows what's best for its customers: products and services are built based on what the telecom expects would fit most customers, similar to mass-produced products where the lowest common denominator is sought after; a product which is acceptable to most, loved by some, hated by a few.

Globally, the new trend is tailored products. With millennials having high expectations and a sense of individuality that mass-produced products fail to satisfy, this trend can be seen in the rise of tailor-made custom products that rely on a social angle to sell, allowing customers to express their individuality through their custom-built, fine-tuned products.

Forbes article on millennials

To meet these new expectations, telecoms are shifting towards a more nimble stack that allows for customer-tailored products and rapid service releases. Telecom vendors are picking up the trend and proposing smarter, more intelligent frameworks such as Huawei's BES (Business Enabling System) and Ericsson's DBSS (Digital BSS).

ROADS is a concept Huawei is promoting, which in my opinion is a very concise way to describe what the next-generation BSS should include.

Real-Time: promos, product recommendations, and order fulfillment should all take place in real time.

On-Demand: products and services can be tailored to match the customer's needs on the fly; soft bundles that can be built on the fly as needed, with a flexible micro-services architecture to cope with them.

All-Online: anything the customer needs should be possible to do online in a channel-agnostic way, with an omni-channel approach that empowers the customer to conduct requests using the channel most suitable to their lifestyle.

Do It Yourself: customers should be able to manage and use their accounts with minimal to no manual interaction with the provider. As with Google products, customers should be empowered to resolve any issue or requirement they have. Customers should also be part of the product and services definition process.

Social: social-friendly products that are compatible with the social infrastructure that currently makes up the internet. Customers should be able to exchange or even sell their surpluses through social channels. Gamification features that drive engagement and increase customer loyalty should be built into the product's DNA.

This concept, along with others, allows telecoms to meet the new challenges posed by a shifting market in which customer needs are drastically different from what was the norm for the last few decades.

Automating the Nespresso Coffee Machine part 2

In this part I explain how to hook up an api.ai agent with Particle, using hook.io as middleware.

Hook.io receives the invocation call from api.ai and acts on the action and action parameters by calling the correct function on the Particle cloud, then responds with the api.ai payload to be displayed to the requester.

The first step is building a new hook on hook.io and pasting the following script, modifying the access token and device ID.


module['exports'] = function coffeeNator (hook) {
  var Particle = require('particle-api-js');
  var particle = new Particle();

  // Report an error back to the caller.
  function output (message, err) {
    hook.res.end(message + ' ' + JSON.stringify(err, null, 2));
  }

  // Simple response object matching the api.ai fulfillment format.
  function MyResponse (speech, displayText, source) {
    this.speech = speech;
    this.displayText = displayText;
    this.source = source;
  }

  // Send a successful JSON reply back to api.ai.
  function respond (speech, displayText) {
    var myResponse = new MyResponse(speech, displayText, 'Coffee Machine');
    hook.res.writeHead(200, { 'Content-Type': 'application/json' });
    hook.res.end(JSON.stringify(myResponse, null, 2));
  }

  var token = 'YOUR TOKEN';
  var action = hook.params.result.parameters.action;

  console.log(action + ' coffee request received');

  if (action === 'warm') {
    warmCoffeeMachine();
    particle.callFunction({ deviceId: 'YOUR DEVICE ID', name: 'warmmachine', argument: 'D0:HIGH', auth: token })
      .then(function (data) {
        console.log('called warmmachine successfully');
        respond('Warming coffee machine for you', 'Coffee Machine Being Warmed');
      }, function (err) {
        output('An error occurred:', err);
      });
  } else if (action === 'make') {
    makeCoffeeMachine();
    particle.callFunction({ deviceId: 'YOUR DEVICE ID', name: 'makecoffee', argument: 'D0:HIGH', auth: token })
      .then(function (data) {
        console.log('called makecoffee successfully');
        respond('Making coffee for you', 'Coffee Being Made');
      }, function (err) {
        output('An error occurred:', err);
      });
  }
};

function warmCoffeeMachine () {
  console.log('machine is warming');
}

function makeCoffeeMachine () {
  console.log('machine is making');
}