Sunday, May 24, 2015

Will as-a-service business models lead to utility computing?

Just as humanity evolved from each family building the things it needed to networks of supply chains leading from natural resources to markets through which customers buy the things they need, software services are evolving from every enterprise developing its own operational IT platform to "as-a-service" IT platforms interconnecting supply chains of providers of specialised algorithms and sources of different types of data (internal and external). Just as electricity/broadband comes through a point in the wall, the computing each house/office needs will come off a point in the wall. Home and work devices will exchange data and computing agents with the point in the wall and provide the interface to use and configure the computing capability. We will get monthly bills for the data/computing resources used and pay them the same way. This is the utility computing destination we are headed towards.

Utility computing will need "as-a-service" customising/configuring of computing needs as well as "as-a-service" operational support. There are many ways in which these can evolve. There is a huge amount of financial wastage in the current way of building and managing the evolving software needs of enterprises, despite there being much in common in the needs of enterprises around the world. Industry-level thought is needed. New community, industry, country and global IT institutions are needed as part of the global IT operational framework to support the global utility computing model. One can of course start distributed and converge later. If one looks carefully, the process is already underway. It is the software industry which is lagging in the way it does things and the tools/offerings it provides.

Why should there be multiple implementations of each algorithm/code component within the Internet as connectivity and computing become cheap and reliable? Why can't the code for a specific algorithm/code component be sourced as needed from a central repository? Why do customers have to pay for the setup and operations of new IT platforms (assemblies of such algorithms/code components)? Why can't they merely pay for their usage of services from the IT platform and expect a specific SLA for each service they use?
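
Purely as an illustrative sketch (the registry URL, response schema and SLA fields below are invented assumptions, not any real service's API), sourcing and invoking an algorithm as a service from a central repository might look like this:

import json
import urllib.request

# Hypothetical central registry of algorithm services; the URL and
# response schema are illustrative assumptions, not a real API.
REGISTRY_URL = "https://registry.example.com/algorithms"

def resolve_service(name, min_availability=0.999):
    """Look up an algorithm service and check its advertised SLA."""
    with urllib.request.urlopen(f"{REGISTRY_URL}/{name}") as resp:
        entry = json.load(resp)  # e.g. {"endpoint": "...", "sla": {"availability": 0.9995}}
    if entry["sla"]["availability"] < min_availability:
        raise RuntimeError(f"{name}: advertised SLA below requirement")
    return entry["endpoint"]

def invoke(endpoint, payload):
    """Pay-per-use invocation: the consumer never installs the code."""
    req = urllib.request.Request(endpoint,
                                 data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage: source a pricing algorithm on demand instead of deploying it locally.
# endpoint = resolve_service("risk-pricing/v2")
# result = invoke(endpoint, {"portfolio": [...]})

The point of the sketch is the shape of the model: the consumer pays per invocation against an SLA, and never owns or operates the implementation.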

Things are moving in this direction in substantial measure as ISVs seek to keep their products/services updated through better service-oriented architectures. Enterprise IT platforms are beginning to be updated automatically with new versions of code components with minimal downtime.

Need to simplify aggregate global technosphere

As I have said multiple times before, complexity is increasing in day-to-day life. The daily lives of humans, which for the most part used to depend on natural uncertainties, increasingly depend, directly or indirectly, on man-made uncertainties embedded in networks of machines and/or computers. Some of the natural uncertainties, like climate, have changed with large impact, but man-made uncertainties continue to increase in number and complexity as the technosphere around humanity deepens.

I wish to reflect here on the changing nature of our ways and means of carrying out our activities in modern life. Computing power is becoming cheaper and more easily available. The range of problems to which computing is being applied is widening. The number and types of algorithms being deployed are increasing. But there are certain categories of problems for which efficient algorithms provably, or on strong evidence very likely, do not exist. The equivalence class of real-life problems where one encounters these limitations is significantly large by itself, and I contend that as systems (of systems (of systems...)) multiply, this equivalence class is growing. Essentially, more and more significant aspects of our day-to-day lives are limited only by the computational complexity of these algorithms.
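
A minimal illustration of the kind of hard limit meant here, using brute-force subset-sum (a classic NP-complete problem; the numbers below are invented):

from itertools import combinations

def subset_sum_bruteforce(values, target):
    """Check every subset: 2**n candidates, so each added item doubles the work."""
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            if sum(combo) == target:
                return combo
    return None

# 20 items -> ~1e6 subsets; 60 items -> ~1e18 subsets, infeasible by brute force.
print(subset_sum_bruteforce([3, 9, 8, 4, 5, 7], 15))  # (8, 7)

No polynomial-time exact algorithm is known for such problems, which is exactly the kind of limitation the paragraph above refers to.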

Now consider the complexity of the enterprise IT architectures (including code and data) in which such code and data are embedded. I recently saw a Gartner video arguing that the only way digital transformation can meet the goals of reducing TCO and improving agility in adapting to business change is to reduce the complexity of IT architectures. Actions by CIOs to reduce the complexity of enterprise IT architectures while adapting to business change are the only way of reducing the future cost of change, the need for which, in a very dynamic business environment, will always be high.

In my view every human needs to be actively involved in managing the complexity of the technosphere around them, and how their technosphere connects to the enterprise, community, national and global technospheres. Unless we together actively manage the complexity of this inter-network of technospheres, the aggregate technosphere will grow in complexity, increasing the future cost of change and/or future inequality of service delivered through it. Every individual and enterprise needs to do their bit to keep the aggregate technosphere simple. The service environment architecture emerging as the technosphere around each individual/home/car/factory also needs to be managed and interconnected in ways which preserve good "network" properties (e.g. equality of opportunity and freedom of speech within the network), else humanity might later waste a lot of money modifying an even more complex network to create these properties.

Net neutrality illustrates the type of principles we need to ensure, and I am sure we will see more and more debates of the net neutrality kind as the Internet (the BIG NEW WORLD) continues to be colonized, just as democracy emerged and grew in America during its colonization.

Regards

Pratap

13th May 2015

Saturday, May 23, 2015

Thinking of the As-a-Service Economy

I have really tried hard to understand the emerging As-a-service economy. I am glad to say that I think I now understand the term better and agree that things are evolving in that direction. Let me explain what I have understood in my own words.

Organisations use business service catalogues delivered to them through a mix of people and technology. Typically the business service catalogue has a back-to-back IT service catalogue. Business staff orchestrate the business services from the catalogue into business processes achieving business outcomes. IT staff orchestrate the IT services from the catalogue to support the business services. The agility (through flexible capability and flexible capacity) and cost of these business processes (and recursively of their business services and IT services) are key to business success or failure. Business always wants more agility at ever lower cost. Each organisation's agility and cost are determined by its people and technology portfolio.
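
As a toy sketch of these back-to-back catalogues and their orchestration (the class and service names are my own illustrative assumptions, not any ITSM product's schema):

from dataclasses import dataclass

@dataclass
class ITService:
    name: str
    cost_per_use: float

@dataclass
class BusinessService:
    name: str
    backing: list  # the back-to-back IT services supporting this business service

@dataclass
class BusinessProcess:
    name: str
    steps: list  # ordered business services orchestrated into the process

    def cost_per_run(self):
        # Cost (like agility) rolls up recursively: process -> business services -> IT services.
        return sum(it.cost_per_use for svc in self.steps for it in svc.backing)

crm = ITService("hosted-crm", 0.40)
kyc = ITService("kyc-check", 1.10)
onboarding = BusinessProcess("client-onboarding",
                             [BusinessService("capture-details", [crm]),
                              BusinessService("verify-identity", [kyc])])
print(onboarding.cost_per_run())  # 1.5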

The As-a-service Nirvana is that the people portfolio is highly skilled, good at learning-unlearning-relearning and value-add focused, supported by a technology portfolio which provides flexible capability and capacity. In this Nirvana, these are delivered through the IT service catalogue and business service catalogue forming the business processes, supporting the business with a high degree of flexibility to quickly vary services and their providers as per the needs of the business, to produce specific outcomes in the context of its business environment. The most important point about the As-a-service Nirvana is that it assumes a huge amount of automation and a huge amount of cloud usage, so that the cost structure options that BPaaS/SaaS/IaaS can deliver are leveraged to the full.

Why is this important, and what is new about it?

To date, the focus has been on each customer making one-time and ongoing investments (and/or expenses) in infrastructure, software applications and (IT and business process) staff. This tied up capital, made reducing people cost through offshoring/outsourcing the only way of reducing cost, and reduced agility due to the high cost and duration of change. It is a bit like building, maintaining and driving one's own manually driven car using T&M/fixed-price services and standard components and being locked into it, when all you need is a car and Avis/Hertz can provide an on-demand selection of cars with various configurations (including more automated, self-driving ones) for a much lower overall operating cost. The key difference between this example and the As-a-service Nirvana is that in the As-a-service economy Avis/Hertz will assemble the "car" on demand from services provided by multiple providers, and the next time you need a car, you can change any of these providers easily.

As I described in this article, I think that things are moving towards utility computing provided through "standardised access" infrastructure, just like electricity. Everyone does not need their own power generation and transmission infrastructure to use electricity; "standardised access" infrastructure is enough to leverage flexible computing as described in that article. As we reach there, enterprises will primarily interconnect multiple "standardised access" infrastructure elements which separately/jointly plug into the computing coming through the wall. It is basically a more evolved way of living, and it will locate responsibilities in a more socially optimal manner.

Regards

Pratap Tambay

23 May 2015

Thursday, May 07, 2015

What is poetry? An elaborate definition

A long time ago, when I was at IITB, I thought hard about my poetry. What did it mean and what was its place in my life? What was the benefit of writing poetry to my life and to society? In those days there was a particular Hindi song illustrating the regularity with which outstanding poets tended to be failures in practical life, which used to scare me. I had realised that I had some talent for writing, but hadn't realised exactly how much. I knew I liked to write, needed to write and felt better by writing, but did not really understand my relationship with my writing, particularly poetry, which was the mainstay of my writing. But I was worried big time that I might become a failure in practical life, like the poets in many movies of those days, and like even the mighty Ghalib, who was a failure in practical life. One of the good things about living on the IITB campus was that I was able to reflect deeply on what poetry is and what it meant to my life.

I realised that if one stays true to one's self and lives life on the front foot emotionally, i.e. follows one's heart, one tends to be hurt far too easily. This was my experience too, and I will skip the details of that experience because they are not central to what I have to say. Essentially, life is challenging, and following one's heart leads to difficult emotional situations of various kinds. Some might be of not being sure, of being so sure that one has doubts, of hope, of loss, and of lots of other typical human emotions related to searching for and finding one's goals and then achieving them. I investigated myself deeply and found that I carried an emotional burden with me and wrote because of that emotional burden. Forcibly ditching the emotional burden did not seem possible, and it tended to influence my life far too much in ways which were not always good. Even through writing, the burden did not seem to go away. It seemed almost as if my heart had some scars/lacunae which needed healing/filling. When I realised this, I also realised that I wanted to feel light and whole-hearted again. I did not know how this would happen, but felt that poetry would somehow help me understand how. I decided that I would go deep into myself, make sense of what I felt deeply, and write poetry about my deepest feelings. That helped me write some good poetry and helped release some of the burden. It helped me move on. I realised the truth of this song.

Writing poetry based on sense made out of one's deepest emotions (which might be shared with others, in the case of social/national/religious poetry) is articulating a problem and CHOOSING to view it in a particular way, which makes emotional and practical sense. I am writing this article for a few friends who have mentioned that they do not get release from writing poetry. I KNOW that writing poetry helps one move on. For those who have tried and have not been able to move on, I would suggest that they explore themselves even more deeply, try to articulate the "problem", and review the various ways of looking at it. If they do this, the heart and mind will tell them the right way of viewing the "problem", so that they can come out of it. There was a time when I felt good about poetry celebrating pain. Perhaps there are some people for whom there is no way of viewing the "problem" that provides release/meaning. But for most people, I think that viewing one's pain at a deep level differently can help find a way from it to a richer experience of life. Most of the time, we never celebrate what we have and keep crying over what we do not and/or cannot have. Your problem and your view may vary. But going deep within oneself, reviewing one's pain and CHOOSING the right view can help release the pain.

So poetry is a way of making and sharing sense of the residue of one's experienced and imagined reality. It can help in healing scars and filling gaps so that one can experience life more deeply and whole-heartedly. If one spends life with scars and gaps in the heart, one is likely to live a life of a particular kind. I believe that those who live more deeply and wholeheartedly are likely to be more socially responsible and sensitive. So poetry is a good thing. It is an intensely practical tool for anyone wishing to live life emotionally on the front foot: a tool to get in and stay in touch with one's deepest self and live a life that really makes sense from the core, despite the very nature of life expressed in this song. Living a dignified and practical life is easier if one uses poetry as a tool.

Regards

Pratap

Sunday, April 19, 2015

Deep learning AI - current dangers

This article about deep learning AI scientist Geoffrey Hinton mentions expert opinions that doomsday scenarios of intelligent AI destroying humanity are at least 40 years away, but that focusing on such doomsday scenarios distracts attention from other, current dangers like the following.

"The National Security Agency in the U.S. has a huge amount of data at its fingertips. It would be shocking if it wasn’t using neural networks to make sense of it. The U.S. Department of Defence continues to fund AI research: how much autonomy can we as a society comfortably transfer to intelligent drones or robots? Appropriate boundaries for lethal autonomous weapons systems are an ongoing international debate. And if you’re already uncomfortable with ads that pick up keywords from your Facebook posts and email correspondence, you might not look forward to those systems getting smarter.

Then there’s the job question. Traditional computing replaced many menial tasks; neural nets are adept at navigating deep reservoirs of knowledge. Startups such as San Francisco-based Enlitic believe that deep learning algorithms can do a better, faster job of reading medical scans than the best-trained human beings."

  1. In June 2013, in this article, I expressed concern about how most citizens are not aware of the technology capabilities available to their governments, and how the secrecy around governmental technology capability development reduces the control of citizens over the ways and means governments can use to repress them if they decide to act despotically. Despots have an incentive to increase this secrecy to build technology that helps them retain power.
  2. The degree of autonomy of systems will increase, interactions between these autonomous systems will increase, and humans might lose control, or knowledge of who controls what. I reflect on such scenarios in this article. I face one such scenario every morning when I walk through my living room, littered with toys by my twin daughters the night before. I never know which action of mine will trigger an unpredictable series of interconnected actions/reactions (of sound, lights, motion) between toys, and I have to be careful to avoid waking them up. Rogue software can complicate these scenarios, as illustrated in this article.
  3. In this article, I talk about how important it is to regulate correctly to give citizens control over their data using the right technology (something being done through the data broker and consumer privacy regulations being considered by the USA), so that leveraging ever more advanced deep learning AI to process this data becomes illegal. Of course, if this is not done, or not done correctly, the freedoms of citizens will be at risk. Their thoughts, words and actions will become vulnerable to influence from those mining their data and planting advertisements and/or other content.
  4. In this article, I talk about how the quickly emerging scenario of large-scale job destruction due to technology is dangerous, because our social sciences (including economics) are not able to predict the kind of society and state we will soon have as this happens. I worry that this might be intentional.
As deep learning AI technology accelerates in its various forms and scales, we will become more and more of a knowledge-based society. Generating, leveraging and protecting intellectual property (algorithms and data) is already quite important and will over time become the most important differentiator between success and failure. Intellectual property has already become the organizing principle of humanity. Due to its nature, humanity will need to organise its affairs differently than at present to survive and thrive. This article talks about the issues and challenges in protecting intellectual property and the implications for the vulnerability of humanity. This article discusses whether ALL intellectual property should be allowed to be private, irrespective of its nature.

When I wrote all these articles, I expected the issues they talk about to become important soon. So I am not surprised to find that they are becoming important. My only concern is that we are not ready to handle these issues. We need to do much more and soon.

Regards

Pratap Tambay

19 April 2015

Friday, April 17, 2015

Importance of the EU-Google case

#GoogleEU

As described in this article, "The European Union's competition commissioner filed a "statement of objections" on Wednesday that brings Google a step closer to facing legal sanctions under European law. The European Commission's specific allegation is a relatively narrow one — that the search giant has broken the law by giving Google Shopping a more favourable position in search results than other comparison shopping services — but the underlying policy issue is much broader. Following the logic of the EU complaint would require a massive transformation of Google's search product. The key point is that Google doesn't just give prime real estate to Google Shopping results. It unapologetically does it for products like Google Images, Google Maps, and Google News — all of which regularly show up in special boxes near the top of Google search results."

I recently wrote this article describing how the inter-network between humans and firms/institutions is becoming more and more centralised, and how a few humans, firms and institutions have increasingly disproportionate and unfair influence over the daily lives of the rest of humanity. Technology increases the ability of the few to "serve" (and in effect control) the many. This is the real issue in the EU-Google case. As I pleaded in my article, humanity urgently needs to change democracy and markets to prevent small portions of humanity from dominating the rest using technology.

In a way, this goes to the fundamentals of our civilization as it stands today. Over the years, the higher weightage in global wealth distribution has moved from physical property to intellectual property. The limitations applicable to this intellectual property are different. In this article, I ask where one should draw the line between individual property, community property and humanity's property. Shouldn't intellectual property which confers significant market power be managed differently from mundane intellectual property? How does one decide what rules should apply to which type of intellectual property?

John Stuart Mill warned all who are interested in the maintenance of democracy not "to lay their liberties at the feet of even a great man, or to trust him with powers which enable him to subvert their institutions." There is nothing wrong in being grateful to great men who have rendered life-long services to the country. But there are limits to gratefulness. Similarly, the Irish patriot Daniel O'Connell said that no man can be grateful at the cost of his honour, no woman can be grateful at the cost of her chastity, and no nation can be grateful at the cost of its liberty.

Democracy and markets are modes of association between humans, and the above guidance applies to both in the context of gratefulness to Google for the benefits its intellectual property has already generated and can generate for humanity. While the benefit of Google's intellectual property to humanity is significant, surrendering significant market power to it cannot be justified. It is a threat to liberty and fair market competition. Any person, firm or institution which subverts the optimality of "aggregate social choice" happening through markets is just as dangerous as a great man entrusted with power to subvert the institutions of democracy.

The phenomena of SIBs (systemically important banks) and SIIs (systemically important insurers), the phenomenon of "too big to fail" corporates, and its collateral phenomenon of "differential rewards to the guilty" are also caused by the increasing centralisation of the inter-network of humans and firms. Adam Smith's "invisible hand" allocating resources to their uses generates reasonable outcomes for humanity here and now and at small scale, but does not necessarily generate good outcomes far away, later and at large scale. Increasing centralisation of the network makes a few visible hands, uncontrolled by democratic forces, dangerous for the survival and prosperity of humanity.

I predict that we will see more and more such cases arising, since humanity is riding the technology tiger and the tiger is running fast into a deep, dark and dense jungle.

Regards

Pratap Tambay

17 April 2015

Friday, December 26, 2014

Ideas of India

As I have said before, there are three competing ideas of India: Gandhi's, Golwalkar's and Ambedkar's. We experienced Gandhi's idea of India all these years and are now experiencing Golwalkar's. All this while, Ambedkar's idea of India (substantially) lies in written but unimplemented form. Our actions are shaped by our views about the desired future. Our views of the desired future are shaped by the social and cultural messaging we are exposed to. True free choice of our common future would involve systematically discussing and debating the alternative ideas of India and choosing the one we all feel comfortable with.

So what do you understand as the similarities and differences between the three ideas of India? What kind of India do we want?

Thursday, August 21, 2014

If I were 22 today

If I were 22 today, I would use the following principles as guidelines while planning my career.
1. I would build experience in stages, starting from specific functional experience and widening out to high depth in one functional area plus a good understanding of other functional areas, so as to arrive at a broad understanding of the end-to-end process of value delivery transactions. Once the end-to-end process is understood broadly, I would focus on deepening the understanding to identify how value gets generated (what is important and why) in the process. This typically takes years, unless one has help from books and mentoring.
2. Building technical skills is not enough. Communication, social, interpersonal, conceptual and relationship-building skills also need to be honed. I would leverage all opportunities to develop these skills.
3. I would learn to identify and manage internal and external stakeholders in proposing and delivering end-to-end value delivery transactions. This involves planning, making and keeping commitments, and building and maintaining trust with people. It involves failing and learning from failures. Unless someone has failed substantially and learnt from it, senior management normally does not trust that person with the most complex value delivery transactions. At the same time there is no better teacher than failure, so from 22 onwards I would volunteer for the most complex value delivery transactions and manage them very carefully, independently or under guidance from seniors. I would try to learn from my own experience as well as that of my peers and seniors. One lives only once, and the more you learn directly or indirectly from real life, the higher your probability of success. Being able to handle failure and bounce back from it is a very valuable asset. So do big things, fail early and learn quickly. If you succeed at doing big things, you get the experience anyway. But until you have had some degree of failure and learnt from it, don't trust your understanding of your limitations; manage them carefully.
4. Careers are not built by shining independently. They are built by carrying people along. Team work and leadership are best learnt and honed by doing. Books, courses and mentoring are not enough. Being able to leverage others to do bigger and bigger things well is the most important thing to learn.
5. Sharpening the saw, in the sense of keeping one's skill set relevant, is a top priority throughout a career. If my employer does not invest in me, then I need to invest in myself.
6. Milking a legacy skillset without preparing for the day when somebody moves my cheese is not smart career management.
Those of you who are 22 today: I hope this helps. Please ask me at prataptambay@hotmail.com if you have questions.
--- Pratap Tambay

B2B Relationship Management

The Dwyer, Schurr and Oh model of buyer-seller relationships describes five phases in the development of B2B relationships: (1) awareness, (2) exploration, (3) expansion, (4) commitment, and (5) dissolution. While the exact roadmap varies, the high-level topography of the relationship can be represented in the following diagram.
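
As a rough stand-in for that progression, here is my own toy rendering in code (the transition rules below are my assumption, not part of the authors' original model):

PHASES = ["awareness", "exploration", "expansion", "commitment", "dissolution"]

def next_phases(current):
    """Relationships normally deepen one phase at a time, but dissolution
    can be entered from any earlier phase if trust breaks down."""
    i = PHASES.index(current)
    options = PHASES[i + 1:i + 2]  # the next deeper phase, if any
    if current != "dissolution" and "dissolution" not in options:
        options.append("dissolution")
    return options

print(next_phases("exploration"))  # ['expansion', 'dissolution']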


The dynamics of the relationship vary depending on its phase. The value derived by both parties in a good relationship increases as the relationship deepens. This deepening involves building personal trust between representatives of the seller and representatives of the buyer, which in aggregate results in organizational trust between the two organizations. The primary job of B2B relationship managers is to understand the phase of the relationship and work towards building and maintaining organizational trust. Such trust presupposes value, so clarity on how value can be created for stakeholders on either side through the relationship is a key skill. Transactions involving promises of value followed by actual delivery of value help build trust in the processes, products and people. As trust increases, the need for and cost of transactions decrease (less redundant QC and governance, less wastage of negotiation time and cost, tighter integration between the respective organizations' processes, products and people), maximizing the end-to-end value delivered to the customers of the buyer. So the skill to consistently make and deliver on promises of value is a key skill for relationship managers.
But sometimes there are problems originating in either organization which slowly destroy trust, increasing transaction cost and reducing value delivered. Typically this happens when the consistency between promises of value and actual value delivered decreases, or when the communication around this misses the wood for the trees. I have faced this kind of situation multiple times. In each case, inevitably, the solution is to go back to basics: understand what value means to each stakeholder, improve the consistency between promises made and the actual delivery of those promises, and communicate about all of this. Typically, once the minimal repair is done, if both parties restart talking and agree a roadmap for the remaining repair (another promise), the executives on both sides relax, and if the roadmap is delivered, the relationship heals. Most relationships go through ups and downs, and all of the above happens to varying degrees over these ups and downs. I have had the good fortune to have turned around multiple customer relationships that I have worked on.
But there is one other aspect of buyer-seller relationships which needs attention. As described above, increasing trust increases integration between buyer and seller. There is then a need and an opportunity for the seller to understand more and more of the buyer's business, and vice versa. This is a positive feedback loop, and kickstarting it requires a particular level of trust to be reached as well as a particular level of competency to develop, without which the relationship cannot grow further. However, if the trust and competency develop, it becomes necessary and possible to work together to define joint go-to-market plans benefiting both parties. I have not yet had the good fortune to see any relationship go to this level successfully. It is my goal to reach this level with at least a few of my clients over the next few years.

Facing BIG opportunities/risks in Career

I started my first proper job in Mumbai with the treasury of ICICI, a large Indian project finance provider (in those days), as a money market trader with additional responsibility for the interface to the IT department. I left ICICI to build a treasury software product. I joined Logica to build a retail banking software product and moved to manage a money market trading software product at Trigyn. The first career curveball (#careercurveballs) came while at Trigyn.
I was headhunted by VCs to join a startup called 3Genesis trying to wireless-enable enterprises using 3G technologies. The offer was far too attractive, but it potentially meant losing my sharp focus on financial services. I thought a lot and took it up. My experience of working in a typical startup, where everything is questioned every fortnight and everyone pulls their weight, comes from there. While this experience added value, we were far too ahead of the game and had to move into hard-core telecom software for the survival of the company. I was suddenly out of place, the guy with the least immediately relevant background. My wife proactively bought me a book on telecom networks by Raj Pandya, and that set me on a journey. I read voraciously day and night, asking lots of questions of whoever could answer. The speed at which I picked up deep telecom impressed the current CTO of Techmahindra (Raju Wadalkar) so much that he nominated me as external examiner for two M.Tech theses at IIT Bombay related to networks. I was as excited about the Parlay APIs in telecom then as I am today about OASIS in insurance. Within a short while I led from the front with the Parlay APIs in winning two large complex projects and delivering one large complex product development and implementation project successfully. We retrained the J2EE experts I had hired into C++ and telecom and delivered successfully, disproving multiple nay-sayers. The product built then went on to a huge number of other installations around the world. Unfortunately, making software was easier than making money, and I did not make much money as my huge ESOP became toilet paper.
But I think this experience left me with the confidence to learn any new domain quickly and handle large complex projects, which was very useful when I joined NITL with little insurance domain experience. While the NIIT Technologies CEO had some doubts about whether I was an appropriate hire, I was confident due to the above experience. In fact, learning insurance was far too easy compared to learning telecom, because I had learnt how to learn a domain quickly. Essentially, it involves understanding the key concepts/entities, relationships and business flows/cycles, and the skill primarily lies in asking the right questions of the right sources, with finding the answers being the easier part in the age of the Internet. It is the ability to ask the right questions that enables faster and better learning.
It is due to the above career curveball that I today have a good understanding of the BFSI, Telecom and Retail domains (I learnt Retail at iGate by dealing with CPW and Ladbrokes). The human mind is amazing: the more you know, the easier it becomes to learn something new.
So my message to you is to take up that BIG opportunity even if you have some doubt in your mind, and put your heart into succeeding at it. You will learn a lot, and it will help you in more than one way over the short and long run. Running away at the first sign of problems is never a recipe for success in any walk of life. Face the unknown in your career with courage, as advised in the Psalm of Life.

Implementing Governance, Risk Management and Compliance

This is my second article on this topic. I would recommend reading the previous article to understand GRC in its full context before reading this one.
Implementing GRC in an enterprise starts with asking the following questions at increasing levels of granularity.
  1. What are the stages of the business life-cycle for this enterprise?
  2. For each stage of the business life-cycle:
  • How is value generated at that stage of the business life-cycle for various stakeholders?
  • Which stakeholders play what roles in that stage and what are their stakeholder governance concerns during that stage? How do managers stay informed and in control, how do they engage stakeholders, and how do they maintain an end-to-end audit trail while addressing the stakeholder governance concerns during that stage?
  • What are the potential risk sources and risks to the business value generated at that stage of the business life-cycle? What are the ways of managing these risks? Which stakeholders play what roles in risk management and what are their governance concerns during that stage of the business life-cycle? How do managers stay informed and in control, how do they engage stakeholders, and how do they maintain an end-to-end audit trail while addressing the stakeholder governance concerns during that stage?
  • What are the internal and external policies, regulations and guidelines applicable at this stage of the business life-cycle? What are the actions needed to comply and to record compliance? Which stakeholders will play what roles in compliance at that stage and what are their governance concerns during this stage of the business life-cycle? How do managers stay informed and in control, how do they engage stakeholders, and how do they maintain an end-to-end audit trail while addressing the stakeholder governance concerns during that stage?
The enterprise will have strategy, processes, technology and people suitable to its vision and mission. Using the answers to the above questions, GRC implementation is the process of instrumenting the strategy, processes, technology and people of the enterprise so that managers can stay informed about relevant information at each stage of the business life-cycle, engage stakeholders as needed to understand and address their concerns through structured stakeholder governance, and maintain an end-to-end audit trail. GRC implementation may involve elements of technology, but is almost never fully automatable.
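
As a purely illustrative sketch of what "instrumenting" a stage can mean (the class and field names below are my assumptions, not any GRC product's schema):

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ControlPoint:
    stage: str       # stage of the business life-cycle
    concern: str     # the governance/risk/compliance concern covered
    audit_trail: list = field(default_factory=list)

    def record(self, actor, action, outcome):
        # Every check leaves an end-to-end audit trail entry.
        self.audit_trail.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "who": actor, "what": action, "outcome": outcome,
        })

    def breaches(self):
        # Managers "stay informed" by reviewing failed checks per stage.
        return [e for e in self.audit_trail if e["outcome"] == "fail"]

cp = ControlPoint("submission management", "sanctions check on insured")
cp.record("underwriter-1", "sanctions screen", "pass")
cp.record("underwriter-1", "financial-strength check", "fail")
print(cp.breaches())
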
In commercial insurance, the high-level business life-cycle starts with submission management, which involves stakeholder interest checks for insureds, risk measurement process elements and compliance checks for insureds, followed by governed (i.e. based on board-approved policies and guidelines which are in compliance with stakeholder perspectives as well as external regulations and guidelines) risk selection, pricing (and potentially risk management), policy issue and reserving decisions, which in turn trigger revenue, claims and risk management processes during the life of the insurance policy. The revenue recognition, claims management and risk management processes also operate based on board-approved policies and guidelines, and I will not detail them here. The process by which board-approved policies and guidelines are applied (with to-and-fro information and decision flows from the point of underwriting to the board and its delegates), and the process by which these policies and guidelines are evolved to address the concerns of multiple stakeholders of the insurance enterprise, in real time or otherwise, are the subject matter of GRC. If GRC has been implemented properly, the insurance enterprise is constantly tracking the risk sources in its business environment and adapting itself to manage the risks from them through the information and decision flows referred to before.
The key variables of GRC implementations are the number of control points in the business life-cycle; the frequency of the information-decision flows from the control points through the GRC process to the managers representing the stakeholders and managing the engagement and governance process related to those flows; the degree of automation of these control points and information-decision flows; and the granularity of the control points and information-decision flows. The number and degree of delegates of the stakeholders (and the duration/comprehensiveness of delegation) between the control points and the actual stakeholders are also important variables influencing the quality of GRC.
Similar to my description in this article, enterprises are "systems" designed to operate in particular scenarios. GRC processes are customised to expected scenarios. It is important to continuously monitor the data coming through the control points to see whether the enterprise (strategy, processes, technology, people) needs to be redesigned to fit the evolving business scenarios if they have changed a lot. These could be positive scenarios, where there are opportunities in the environment to launch new products/services, diversify, or integrate forward/backward, or negative scenarios, where the organization needs to rework its core business proposition to survive, or choose to wind up. In such cases, the control points and GRC process may need to change. Continuous scanning of the environment has until now occurred in an unstructured manner. With GRC there is now a mechanism for doing this better.
If you are an insurer in the UK and would like to discuss implementing GRC for your organization with me, please write to prataptambay@hotmail.com.

Data driven competitive advantage: Implications for consumers and enterprises

The industrial age started with competitive advantage deriving primarily from manufacturing products/services cheaply and with good quality and selling them to consumers (usually in a vertically and horizontally integrated single enterprise). It evolved as the enterprise split into value chains of smaller specialized enterprises that reduced cost and improved quality while collaborating to deliver the end product to the consumer. The next step was flexible high-quality manufacturing and value chain management to introduce products/services customized to various customer segments and their fickle demand, as well as management of this demand through marketing and advertising, which had also evolved over the previous steps. Now we have reached the next stage of this logical evolution: information about customer needs and wants, as they evolve, is the biggest asset.
Consumers are voluntarily sharing information about themselves with various enterprises, who then (optionally) share this information with other enterprises. We have all signed such contracts on the Internet or on paper. Sometimes such information is provided involuntarily, consciously or unconsciously, by consumers to enterprises (and through them to other enterprises). Sometimes such information is plainly stolen and utilized by enterprises for profit-making purposes. Management of this information stream, its analysis, and strategic decision-making based on such analysis are driving actions across entire value chains in most industries. Essentially, the primary driver of competitive advantage in this new new world is the ability to derive non-linear profits by leveraging high-end computing resources to gather and analyze data (about people and machines) and drive actions through the value chain. This is visible in multiple enterprises, Amazon and Netflix being the poster boys, but these practices are spreading through all industries and enterprises.
No human currently alive knows for sure what and how much information other humans/enterprises now hold about him/her. One does not even keep track of how many rights over one's own information one has signed away to other humans/enterprises. Control over the inferences possible from discrete pieces of information is even more difficult to track. Such information and inferences are used to influence subsequent decisions. Some of my friends have described how our decisions and actions are being influenced day in and day out, raising doubts about whether we are free in any meaningful manner. It is due to this that data protection regulations are becoming stricter, to protect consumers, their data/information, and through it their freedom.
Data is the primary source of competitive advantage in the Internet economy. Protecting it properly is important for survival and growth. The expenses and ceremonies around data can only rise for now. All industries (including the offshoring and outsourcing industries) need to evolve rapidly to remain viable in the face of these expenses and ceremonies.

Non and multi model reality, decisions, failures

I read an interesting article today about "The problem with risk models". It triggered a variety of thoughts in me.
Humans experience the world through conceptual models of it. The Buddha is said to have described how to experience the world as-it-is (i.e., I assume, without models). But I don't even understand what that means. So for practical purposes, human experience is all about models of the world. Science and mathematics both build and use models (structures/frameworks). Science builds theories (models) to explain the world, and scientific revolutions (a la Thomas Kuhn) update these models towards higher accuracy. The models explaining different aspects of human experience differ from each other and are essentially "local" to their area of experience. There is no one theory/model that explains both how to bake the best bread and how gravitation works in the universe. This is the nature of the game. Humans build models based on experiences determined by location and time.
Human actions depend on the mental models by which humans evaluate their options and choose among them. These mental models are based on the individual's experience, or on learning from the experience of others (past/present), and contain multiple assumptions (not all validated or validatable). A common feature is that they are biased by human experience. But reality is not constrained by the presence or absence of a human to experience it, and there is always enough reality never experienced by any one human or by those he/she has had occasion to learn from. Essentially, models are based on limited experience and are speculations that this experience is representative enough for that scale, time and type of experience. Essentially, models have gaps by default. They are useful ways of dealing with reality, but one needs to stay aware of the potential realities that the model does not help in thinking about, since there almost always are such potential realities. One needs to understand the limitations of one's models at all times, else one tends to fool oneself far too easily, with dire consequences.
Now let us consider using a model. Most models are useful "most of the time". It is the remaining time, when the model is not useful but the case in question has high impact, that really matters. Knowing the "region" of experience in which a particular model is useful, as distinct from the "region" where it is not, is a key part of knowing the model and using it correctly. Using multiple models to manage a complex area of experience, where one model is better for one sub-area and another for a different sub-area, is sensible, but requires more skill and care. Till knowledge advances and one model applicable to the entire complex area of experience becomes available, such multi-model approaches are unavoidable. Oversimplifying reality by assuming just one model is risky.
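
A minimal sketch of this multi-model approach, assuming each model declares the "region" of inputs in which it is trusted (the models, regions and numbers are all invented):

def linear_model(x):
    return 2.0 * x + 1.0       # calibrated, say, for small x

def saturating_model(x):
    return 10.0 - 50.0 / x     # calibrated, say, for large x

MODELS = [
    {"fn": linear_model,     "region": lambda x: 0.0 <= x < 10.0},
    {"fn": saturating_model, "region": lambda x: x >= 10.0},
]

def predict(x):
    for m in MODELS:
        if m["region"](x):
            return m["fn"](x)
    # No model claims this region: refuse rather than silently extrapolate.
    raise ValueError(f"x={x} is outside every model's region of validity")

print(predict(3.0))   # 7.0, from the linear model
print(predict(50.0))  # 9.0, from the saturating model
# predict(-1.0) would raise, making the gap in the models explicit

The design point is the explicit refusal: the dangerous case is not the region a model covers badly, but the region no model covers at all.
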
Now consider models driving the actions of humans. The actions of one human trigger same/different actions by different humans. Sometimes such action chains are consciously designed (and constitute "systems" in a loose sense) and may have feedback loops too, triggering same/different actions by the humans who originated some of the actions. Such "systems" cause correlations between the actions of different humans (some of which may be related to the reasons explored in the article referred to above). Most of the time, feedback loops in such "systems" sort themselves out, but sometimes they do not, causing "system failures". There are real correlations and spurious correlations between the actions of various humans, and only some "systems" really exist. Different "systems" of humanity have different drivers for the correlations between constituents. But most of the time these "systems" are consciously or unconsciously designed to work "most of the time", and little attention is paid to identifying and preventing failure scenarios. Humanity is still learning to identify and prevent potential failures in time in most of its key systems.
Previously, most public systems evolved and were hardly designed. Now many public systems are designed and their evolution is governed consciously. But humanity's understanding of the process of designing and managing such systems is in its infancy. As the Internet pervades more and more living and non-living things on this earth, the number of drivers of correlations between human actions is set to increase, due to the increasing number of consciously or unconsciously designed human "systems". As this happens, unless humanity learns to identify, understand and manage these "systems" better, we are sure to have multiple crises which we do not understand. We increasingly inhabit a world which is becoming more and more complex, and unless our understanding of it evolves, with newer concepts and tools to identify, understand and manage "systems" at the social and global level, we are set to undergo a period of unprecedented turmoil.
I talk about some of these issues in my other article, but unfortunately there is little consciousness among humans of the changed nature of our common existence, as humanity continues in its reverie. But I contend that this reverie will soon be broken, and a frantic struggle to make sense of an increasingly complex world will begin.
Pratap Tambay
2nd August 2014

IT Services and Enterprise Risk Management

IT services instrument nearly all operations of many enterprises, in in-sourced or outsourced manner. They connect the people and processes that operate the enterprise. Those who provide IT services (typically the IT department) understand the real-life and potential end-to-end business scenarios that the enterprise faces and is equipped to handle better than most other enterprise staff. Yet in most enterprises, risk management is a function kept quite separate from the IT department. This seems to be a legacy of a silo-based risk management approach (i.e. one focused on isolated portions of the business) rather than the modern approach of taking an enterprise-wide view of risk. As managing enterprise-wide risk in an increasingly dynamic business environment becomes more and more important for the sustenance and growth of enterprises, this legacy is no longer appropriate. As the business environment becomes more volatile, loose integration between the enterprise risk management department and the IT department means less flexible, less granular and less frequent measurement and management of enterprise risk. Due to this, IMHO, a tighter integration between the enterprise risk management department and the IT department will maximize business value.
1. The recent events in the airline industry, where commercial flights (MH370, MH17 and AH5017) had accidents in regions of extreme events due to war-like situations and atmosphere, indicate the need for commercial airlines to integrate enterprise risk management into their operational decision-making processes and systems (see the sketch after this list). But it is clear that risks due to geological events (e.g. the Icelandic volcanic ash cloud) and terrorism (9/11) also need to be monitored closely and in near real time. The world is an increasingly dynamic place. The advent of newer technologies like drones makes the world ever more dynamic and risky. There is no alternative to managing risk more aggressively than before.
2. Business agility is a gospel which has been preached for some time now as the need of the hour, in response to the dynamism of the business environment. Yet the priorities of outsourcing and offshoring continue to be centred around saving money, with little attention to business continuity and resilience. Most businesses are not organized to respond quickly to risks of various kinds (climate change, technology). Their people, technologies and processes are mired in the old, less dynamic and less risky world.
3. Managing the increased risk level, through making provisions (and pricing them into sale prices) and through actions to eliminate, reduce or transfer risks so as to ensure the longer-term sustainability of the business, needs to be baked into the enterprise, people, IT and business process architectures.
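
As a purely illustrative sketch of point 1, wiring a near real-time risk feed into an operational decision (the feed, region names and thresholds below are all invented):

# Stand-in for a near real-time enterprise risk service.
RISK_FEED = {
    "eastern-ukraine":  {"type": "conflict",     "severity": 0.9},
    "iceland-corridor": {"type": "volcanic-ash", "severity": 0.4},
}

def route_allowed(region, max_severity=0.5):
    """The operational decision consults the risk feed before every flight."""
    event = RISK_FEED.get(region)
    return event is None or event["severity"] <= max_severity

for region in ("eastern-ukraine", "iceland-corridor", "north-atlantic"):
    print(region, "->", "fly" if route_allowed(region) else "reroute")
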
It is no longer ok for IT services to be divorced from risk management. Enterprises and their IT service providers need to integrate enterprise risk management into the core of their activities in a much more real-time manner than before.
This is increasingly mandatory for survival and growth in the brave new world.
Regards
Pratap Tambay
prataptambay@hotmail.com

What are the lead indicators to project failure?

Project failure means not meeting planned schedule, effort and quality goals. This definition is important because unless a project plan (including RAID: risks, assumptions, issues and dependencies) has been reviewed, approved and baselined for feasibility, it is not a good basis to evaluate failure against. The process of reviewing, approving and baselining the project plan (including RAID) also includes reviews of estimates, including contingencies. It is important to have done all that is possible, based on the information available, to ensure that the plan is sound and grounded in reality.
  • Most of the time it is possible to plan in detail only for a portion of the entire project, and the rest of the plan may be sketchy till the initial portion of the plan generates the information needed to plan subsequent phases. This is the nature of the game. One must plan what one can and generate RAID items for the rest. As the project progresses, the RAID items get resolved and the rest of the plan fructifies. In such cases, one tries to design the project to generate maximum information at the earliest (POCs and the like) and agrees commercials which take less risk till this information is generated. Only after this information is generated are bigger commitments made. Even then, the commitments may include risk margins and vary in terms of commercial models.
  • Sometimes, despite every effort by the PM, the information inputs to planning may be of such poor quality that the project is doomed to fail from the beginning. We will ignore this case, since in real life one never commits to execute a project till adequate information of adequate quality is available to make a good-quality plan to generate the business outcomes and benefits needed by the customer.
Once one has done all the above, has a good plan and has started executing it, what are the lead indicators of failure? Before discussing the lead indicators, let me make a few points which determine their high-level context.
  1. Every project will have a critical path determining its duration, and other paths will have some tolerance. Similarly, the resources for the project will have some availability, amounting to a base targeted effort plus some tolerance (i.e. contingency). Similarly, quality goals for individual deliverables will have some tolerance within the overall quality goals for the project.
  2. As each week passes, the project tracking process tracks each planned task to completion in terms of duration, effort and quality. Every week this yields a project-level measure of duration, effort and quality, and a measure of remaining duration, remaining effort, and the overall gap with respect to target quality.
  3. Every week the project manager and steering board monitor whether the remaining scope of work can be delivered within the remaining effort such that the overall quality target can be met. This involves looking at the possibility of speeding up tasks on the critical path by deploying more resources. It involves looking at the possibility of reducing effort by leveraging automation, by better design requiring lower effort, or by descoping non-critical items in consultation with the customer. It involves improving quality towards target quality by deploying more resources to fix and test defects. Sometimes the plan may change without milestones changing, in which case it need not be communicated to the client. Sometimes intermediate non-critical milestone dates may change to maintain critical subsequent milestone dates. The latter may need discussion with the client. I have generally found customers quite understanding in such cases, as long as your explanations are sound and based on reality.
  4. Moving in the above manner, one completes the project meeting schedule, effort and quality goals, or else it is sometimes necessary to change critical milestone dates (or split scope and leave some work for a later milestone date), increase cost, or relax the quality criteria for acceptance. While these may look like failure, depending on the context they may not be viewed as such, and a lot depends on how it is presented. IMHO, true failure is when the project is canned and/or overshoots the client's budget and/or significantly misses the internal GM (gross margin) target.
Given the above flow of project life-cycles, I think the most important variables in a project are schedule variance, effort variance and quality variance, at task level and at various aggregations thereof. The root causes for these can vary a lot, but taken together, if the trend and outlook of these variances are worsening, then something is wrong and needs investigation to track back to planned goals. If the trend and outlook of the variances are worsening and investigation reveals that there is no easy way to recover to planned goals, failure is probable, but one needs to be careful to base the prediction on sound analysis, as described below.
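
A minimal sketch of such a lead-indicator check, assuming weekly (actual, planned) measures and an invented 10% threshold:

def variance(actual, planned):
    return (actual - planned) / planned

# Weekly (actual, planned) pairs for one project, invented for illustration:
# schedule in weeks, effort in person-days, defects against a quality target.
weeks = [
    {"schedule": (11, 10), "effort": (105, 100), "defects": (12, 10)},
    {"schedule": (13, 11), "effort": (118, 108), "defects": (18, 12)},
    {"schedule": (16, 12), "effort": (135, 115), "defects": (27, 14)},
]

def non_decreasing(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

keys = ("schedule", "effort", "defects")
trends = {k: [variance(*w[k]) for w in weeks] for k in keys}
at_risk = all(t[-1] > 0.10 and non_decreasing(t) for t in trends.values())
print(trends)
print("investigate: failure probable" if at_risk else "on track")
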
From experience, schedule and effort variances caused by wrong estimation are easier to recover from in commercial IT projects. It mostly requires deploying more resources or more skilled resources respectively. If this is possible, most PMs and delivery heads do this, or else they negotiate with the customer for a delay (essentially declaring failure). However, in my experience, quality variance is much more nuanced to figure out, and its influence on the other two variances is also very significant for drawing the right conclusions.
  • If quality variance at a particular stage is high, most of the time it is possible to spend more effort to reduce the variance, and depending on the skill of the resources deployed, the impact on effort and schedule variance will vary. Essentially, the number of defects at each stage determines the impact on effort (and potentially schedule).
  • However, if the defect trend and outlook across the life-cycle is such that the number of defects increases with each stage, it indicates something deeply wrong at some earlier stage. Sometimes it is possible to quickly identify the root cause in the earlier stage and complete the rework for subsequent stages up to the current stage without significantly impacting effort and schedule. At other times this is not possible, and failure is the right prediction. Sometimes it is not possible to identify a root cause in an earlier stage, and in that case projecting the additional effort needed to meet quality targets might trigger the failure prediction. Of course, depending on the commercial construct underlying the delivery, the method of dealing with the failure may vary.
So, simply stated: if the trends and outlooks of the schedule, effort and quality variances across the stages of the project are high and non-decreasing, project failure is the most likely outcome. But this prediction needs to be made carefully, depending on whether a deeper cause (or causes) can be inferred.

Is this future already here?

I read somewhere that the future, in some measure, is already present right now. It seems like a meaningless tall claim, but it is actually a deep insight into the nature of reality. For every potential future, some element of here-and-now reality contains its seed. In that sense, the future is already here. Let me describe one critical element of what is happening.
1. The Internet is connecting humans and things to each other.
2. Systems spanning multiple nodes (user-layer nodes, business-layer nodes, data-layer nodes and the cloud-layer nodes each of these can access) process data sourced from multiple nodes of the above Internet through online/offline interfaces of various kinds, to make or support decisions and/or actions at multiple nodes of the Internet.
3. The security, availability, scalability and various other attributes of the above systems depend on the footprint and nature of the nodes and interconnections in the sub-network, on the interconnections of the sub-network with the wider Internet, and on human factors.
4. The diversity of technologies in the Internet gives different levels of control and vulnerability to different nodes, and the distribution of these levels changes with time.
5. Policing/monitoring the Internet is practically impossible in any substantial sense, due to which no one can be sure whether the current assumptions of humanity's ways of life and work (systems of ownership of money and property in particular) are under threat.
6. Humanity made one mess by moving away from a gold standard of money to paper money, and this was exacerbated by digitizing money and property ownership. Using physical power or forgery/fraud to usurp money/property did not die out with digitization. Usurping money/property by using computing and/or physical power and/or trojan-horse fraud to break digital security remains possible.
7. Trusting ANY expert that a particular system for money/property is safe at any one time has become difficult. And trusting that a system which is safe for now will remain so has become difficult. No one really knows who can do what at any given time.
8. My wife formally studied trust at Birkbeck College, London University, as part of her M.Res. (Management) course. Discussing with her, I have become aware of various types of trust. Based on my study of anthropology, computing and finance, I am sharply aware of how trust shapes human relationships, social life, computer systems and financial institutions like banks.
9. We all know how loss of trust in banks causes runs on banks, and how this can sometimes escalate into loss of trust in the financial system. We all know how loss of trust causes problems in human relationships and social life.
10. As financial institutions, personal relationships and social lives digitize and move online (driven by multiple IT/BPO players like my own employer) to derive competitive advantage, we learn to trust systems with all our secrets and wealth. Many people do so much online that they hardly use pen and paper, and their handwriting is deteriorating. Many people put their photos, blog posts and documents online, hardly keeping physical copies. Many people keep their money, stocks and land-registry records online, with or without choice. All of this is intermediated by trust in systems in general.
11. What will happen if humanity loses trust in systems for the various reasons alluded to above? Are we capable of going back, without problems, to a non-digital way of living and working at all? Does humanity have a backup? Given the kinds of technologies easily available to create forgeries and fraud, physical means of securing money/property may no longer work. Will physical/technological power be the only way of protecting our money/property as a backup if humanity loses trust in systems?
12. I contend that the Internet is like the financial system, and it is possible to lose trust in it. We have not had any widespread crisis of this type yet. But I am sure that it is possible, and that we will have multiple such crises in the future.
13. In the final analysis, it is not a good idea to put too much trust in systems. It is desirable to retain the ability to live and work by non-digital means, where the trust underpinning social lives and systems of property is maintained in systems of human trust, else we risk a total breakdown of humanity's ways of living and working if systems go amok. I have described some ways in which systems can go amok on my blog, and my experience in IT tells me that humanity's ability to engineer and manage systems of large scale and scope is in its infancy.
14. If systems go amok and most of humanity's ways of life and work are impacted, humanity will suffer massively, as I have indicated in my blog. I experience this every day when I walk through the mess of toys left by my twin daughters in our living room. I am never sure if touching something will make noise, cause motion, or trigger some speech, song or light show. Life in a mess of systems which have changed the cause-effect relationships on earth will not be different in essence.
In some measure, this possible future may already be here. Is it? What do you think?