Monday, October 20, 2014

Conference: Computer Society of India's National Conference on Formal Methods in Software Engineering


Turns out I'm just as comfortable with geeks as I am talking to entrepreneurs :) The 15th and 17th of this month witnessed a conference on Formal Methods in various sectors of software engineering. This was the first workshop the Computer Society of India conducted in India on formal methods, held at IISc Bangalore (some more info about the institute here). 

The stately IISc main building
The lush green campus at the IISc

The Computer Society of India was established in 1965. The Bangalore chapter is the 7th in India. It has 2700 members, 72 chapters and 600+ student chapters.

After the inauguration ceremony...




...we were introduced to the problems researchers face when they try to introduce formal methods to the industry. The only thing the industry is interested in is profits and product performance. So if you try to introduce formal methods anywhere, you'll face resistance because of their complexity and time-consuming nature. How do you feed them to the industry then? You sugar-coat them.


Topics


General info on Formal Methods
By Dr.Yogananda Jeppu (Moog India)



Dijkstra said, "Program testing can be used to show the presence of bugs, but never to show their absence!" Being 99% safe is not sufficient for a safety-critical system. Ideally, programmers shouldn't waste their time debugging; they shouldn't introduce bugs in the first place.
In the very first build Dr.Yogananda's team analysed of the LCA's code, they found a divide-by-zero error. He says this happens often in flight-critical code. It has even caused a ship to get stranded at sea: the whole control system failed and the ship had to be towed by another one. Imagine what would have happened if it was war-time.

So why do bugs exist and what can be done about it?
It happens because of the language we write requirements in: English. English sentences can be interpreted in many ways, and people misunderstand requirements simply by assuming things. This is where formal methods come in.

We require something more precise than English to define our system requirements. Formal methods define a complex system as a mathematical model, broken down into small requirements where every requirement has a mathematical equation. Once you define that, it becomes easy to prove (using provers) that the system behaviour is right.

Formal methods aren't new; papers were written on them in 1969 and 1975. You write the requirements as a formal specification in a modelling language (such as Z). The engineer then works with a set of theorems. Eg: one theorem could be that you turn on a switch, electricity flows and a bulb should glow. But when you do the theorem proving, you'll find the proof fails when the bulb is fused and the light doesn't turn on. That's when you realize it's a condition you missed mentioning in your theorem. Such gaps surface when you try to prove conditions in a formal notation.

So we write an equation, run it and refine the system behaviour. That's how formal methods are used. After that, it's just a question of writing this mathematical equation in C code, compiling and running it.
  • Specify your system
  • Verify your system
  • Generate your code
  • And you're ready to fly! (it's not so easy though)


Non-acceptance of formal methods and semi-formalizing it

When testing is too difficult, or formal methods are not accepted in the industry due to complexity or a time crunch, a lighter version of formal methods can be used: maybe just assertions, just a mathematical equation, or formal methods applied to only part of the system. For the Saras aircraft, mode transitions were done in a semi-formal way. Airbus uses it for low-level testing, to check the highest value an integer or float can take, or for worst-case timing analysis. You can qualify the tool for checking such parameters.

Given that DO-178C certification is necessary for avionics, a formal methods supplement released in Dec 2011 can be used to comply with the certification requirements. Expressing requirements in the form of mathematics forces people to ask questions and clarify the requirements.




Deductive verification of C programs
By Dr.Kalyan Krishnamani (NVIDIA)



Wikipedia defines deductive verification as "...generating from the system and its specifications (and possibly other annotations) a collection of mathematical proof obligations, the truth of which imply conformance of the system to its specification, and discharging these obligations using either interactive theorem provers (such as HOL, ACL2, Isabelle, or Coq), automatic theorem provers, or satisfiability modulo theories (SMT) solvers".

Deductive verification can be performed using Frama-C. It has a bunch of plugins for value analysis (what values the variables of a program can take), coverage and other methods. You can write your own plugins too (like one to detect division by zero). Plugins are written in OCaml (it's easy to learn, but writing optimal code in it is an art).

Why verification?

In 1996, the Ariane 5 rocket was destroyed because it reused software from the previous-generation rocket without re-verifying it: while converting a 64-bit floating-point value into a 16-bit integer, the software shut down due to an overflow error. The rocket had backup software in case the primary shut down because of an error, but the backup was identical, so it shut down too and the rocket self-destructed (that's what rockets are programmed to do if something goes wrong).
That was long ago, but even today, verification methods are needed. More so, because software complexity has increased. Airbus and Dassault aviation use Frama-C (Framework for Modular Analysis of C programs) very often.

What it does: Frama-C gathers several static analysis techniques in a single collaborative framework. The collaborative approach of Frama-C allows static analyzers to build upon the results already computed by other analyzers in the framework. 

You can verify functions standalone; you don't need a main() function. Frama-C uses CIL (C Intermediate Language) internally, and ACSL (ANSI/ISO C Specification Language) as its specification language.

This is how verification would be done with ACSL, for a function that returns the maximum of two values (the code in comments is the verification code; note how it checks that the return value is either x or y and no other number — would you have checked for this condition if you wrote a test case?):

/*@ ensures \result >= x && \result >= y;
    ensures \result == x || \result == y;
*/
int max (int x, int y) {
    return (x > y) ? x : y;
}
Once a programmer defines such conditions, you don't need separate documentation. The ACSL notation itself becomes the document/requirement specification for the next person who comes across the code. There's no miscommunication either, since it's mathematically expressed.
Through ACSL, you can use annotations, named behaviours, allocations, freeable, ghost variables, invariants, variant and inductive predicates, lemmas, axiomatic definitions, higher-order logic constructions, lambdas and more.
Frama-C can also be used for regular testing, using plugins.


The architecture

As shown in the image below, the tool Krakatoa is Frama-C's counterpart for Java: it processes Java programs through the Why 2.30 tool, and KML (Krakatoa Modelling Language) is the language used for writing the code properties. These properties are fed into the Why tool, which converts them into an intermediate language, Jessie (although nowadays the WP plugin is generally used instead of Jessie). Why then works on what Jessie generates and produces Verification Conditions (VCs): a bunch of assertions, like mathematical formulae, which are translated for the various prover modules. An encoding step (rather, call it a 'translator') takes the verification conditions and translates them into the different input languages of the provers. If the verification conditions are satisfied, the properties (specification) you wrote are correct.
Architecture of certification chains: Frama-C, Why, Why3 and back-end provers

  • For Frama-C, the input is a C program. 
  • Deductive verification (the technique which, through a bunch of rules, generates the VCs) produces the verification conditions (so you don't even need to know deductive verification to start using Frama-C, though you'll need to know a bit of Jessie when you need to debug). 
  • Provers like Simplify are then used to verify them.

For arrays, structures, unions etc., Frama-C already provides the commonly used memory models. It also allows you to specify your own memory model, or to download any other model and integrate it with your system. The ability to plug in other models was needed because the next version of Frama-C might support C++. 


Hoare's triple

Deductive verification research is based on Hoare's triple. You have a set of rules that dictate how to handle a variable assignment, how to handle a loop and so on. 

This is the Hoare's triple: {P} s {Q}

Where s can be any program fragment (an if statement, etc.), P is the pre-condition and Q is the post-condition (P and Q are what the user writes; remember ACSL above?). If P holds before the execution of s, Q will hold after it.


Other tools

While Why is an excellent modelling and verification environment, other tools are also helpful: B-method (used in the automotive industry), KeY (used for Java deductive verification of the Paris automated metro trains; it generates loop invariants automatically), Boogie, Dafny and Chalice (for deductive reasoning), and CompCert (a certified C compiler that can actually replace GCC; it caters to the strict coding guidelines necessary for avionics software — eg. variables can't be used without first being initialized). Frama-C is not just a formal tool: there are also coverage tools (for unit testing), runtime error checking, data flow analysis, plugins for concurrent programs and more.

To know which tool is best suited to what application/field, ask the researchers who work in that field.




Refinement based verification and proofs of functional correctness
By Prof. Deepak D'Souza (IISc)


Prof. Deepak introduced us to the question of "How do you prove functional correctness of software?"
If you were to verify the implementation of a queue algorithm, one way to prove it is to use pre and post conditions. The other way is to prove that the implementation refines (provides the same functionality as) a simple abstract specification.

Hoare logic based verification can prove complex properties of programs, like functional correctness, and thus gives us the highest level of assurance about correctness (unlike testing techniques, which do not assure us of the absence of bugs). The graph below shows the level of correctness that's required and the amount that's provided by the four techniques.

In the picture above, Frama-C is what uses an abstraction refinement technique, whereas bug detection is the normal testing done with ordinary testing tools.

Hoare logic was demonstrated using the VCC tool. For a max() function similar to the one proved with Frama-C, the specification in VCC has this kind of syntax:

int max(int x, int y, int z)
_(requires \true)
_(ensures ((\result>=x) && (\result>=y) && (\result>=z)))

Again, there's another condition to be added here, which has to ensure that the result returned should be either x, y or z, and not any other number.

The condition is then sent to an SMT (Satisfiability Modulo Theories) solver, which says whether the condition holds or not.


Overall strategy for refinement
To verify a program, you have to use both techniques: first Hoare's technique, where you use a refinement language to generate annotations, and then the annotations in tools like Frama-C.




An industry perspective on certifying safety critical control systems
By Dr.Yogananda Jeppu (Moog India)


This talk was about mistakes we make in safety critical applications.

What is a Safety Critical Application? When human safety depends upon the correct working of an application, it's a safety critical application. Examples: Railway signalling, nuclear controllers, fly-by-wire, automotive domain (cars have more lines of code than fighter jets), rockets, etc.

Every industry has its own standard for safety (avionics, railways etc.). But accidents still happen and safety problems can cause products to be recalled (like Toyota, which had a software problem that prevented the driver from applying the brakes).

Dr.Yogananda proceeded to show us many examples of past incidents: aircraft that crashed because of software errors that reported the plane's elevation wrongly, people exposed to too much radiation in medical systems because of software errors (in 2001 and again in 2008, which shows that programmers make the same mistakes repeatedly), and trains crashing into each other because sensors erroneously reported that the trains were not in each other's path.

In hindsight, all these errors could have been caught by the one test case that was not written. Fault densities of 0.1 per 1000 lines of code are seen in space shuttle code. In the nuclear industry, the probability of an error is 1.14 × 10^-6 per hour of operation (it should actually be 10^-9 or lower, but the industry is nowhere near that).

Dormant errors

Once, an error involving a tiny number, 1e-37, was found in aircraft software after lying dormant for twelve years. This was due to denormal numbers: floating-point numbers can be represented down to about 1e-45 on PCs, but some compilers treat numbers smaller than 1e-37 as zero. So in one of the aircraft channels the number was 1e-37 and in the other it was 1e-38; one channel didn't work properly, and the issue came to the forefront when an actuator misbehaved in a certain unique situation. Note that this happened after all the flight testing was done. Errors in filters and integrator systems are common in missile systems and avionics.

So code coverage tests are very important. Also check variable bounds: run many random cases, monitor the internal variables and collect max and min values — formal method tools exist to do this. Many times, designers don't take verifiers' observations into account; they say it's not necessary and talk about deadlines. Never generate test cases just to achieve code and block coverage: coverage only shows which locations of the code aren't tested, it doesn't show deficiencies.

Nothing is insignificant in safety critical applications.


Standards

You have to know what kinds of problems can happen in a system before writing code, and standards help with this. The aerospace industry has standards like DO-178B. Coding takes only about a quarter of the time that verification takes, but verification is very necessary. Certification is a legal artefact that has to be retained by a company for 30+ years.


Model based testing and logical blocks

You can even use Microsoft Excel based equations as a model for testing. If model and code outputs match, we infer the code is as per requirements. Instrumented and non-instrumented code output should match the model output very well. You cannot blindly rely on a tool that generates test cases.
For safety critical application, all logical blocks have to be tested to ensure modified condition / decision coverage (MC/DC).


Orthogonal arrays

Orthogonal arrays can be used in any field to reduce the number of tests. A free tool for this is called "AllPairs", and MATLAB has orthogonal array generation routines. There's also the ACTS tool from the National Institute of Standards and Technology, and tools like SCADE can help with certification.

Any safety critical tool has to be qualified. It's not an easy task, and you shouldn't use software without it being qualified for use. Hire someone to qualify it and test the limitations of the tool under load.


Testing tantra
  • Automate from day one. 
  • Analyze every test case in the first build. 
  • Analyze failed cases. Do a debug to some level. 
  • Eyeball the requirements. 
  • Errors are like cockroaches. You'll always find them in the same place (cockroaches are always found behind the sink). Eg: you'll always find problems with variable initialization. 
  • It's useful to have a systems guy close-by. 
  • Have tap out points in model and code.



Race detection in Android applications
By Prof. Aditya Kanade (IISc)

This talk was about formalizing the concurrency semantics of Android applications: algorithms to detect and categorize data races, and environment modelling to reduce false positives.

Droid Racer is a dynamic tool developed by Prof. Aditya's team to detect data races. It runs on Android apps and monitors execution traces to find races in each execution. They have run it on applications like Facebook, Flipkart and Twitter, and found data races. It performs systematic testing and identifies potential races in popular apps.


The Android concurrency model

Android has the usual concurrency primitives — dynamic thread creation (forking), locking, unlocking, join, notify-wait — plus an additional concept: asynchronous constructs. Task queues hold asynchronous tasks, and threads are associated with task queues. This is also called event-driven programming: a FIFO model, where any thread can post an asynchronous task to a task queue.
Eg: While you're playing a game on your phone, an SMS might arrive or your battery may run low and the phone would give you a warning message. 
Asynchronous tasks execute atomically; there's no pre-emption. Once a thread picks up a task, the task runs to completion.

A typical Android thread creation and destruction sequence
In the above diagram, note how the system process and application processes are separate. Each process can have its own number of threads within it. Events are enqueued and threads like "t" are created which run the tasks in the task queue.

Android can also have single threaded race conditions.
Sources of non-determinism can be:
  • Thread interleaving
  • Re-ordering of asynchronous tasks

When checking for races you have to check for:
  • Forks happening before initialization of newly forked thread
  • Thread exits before join
  • Unlock happening before a subsequent lock and more

Droid Racer algorithm

It builds a graph of vertices and edges that represents what the professor calls a "happens-before ordering". To check for problems, they take two conflicting memory accesses and check whether one is ordered before the other in the graph. If neither access is reachable from the other, you've found a potential data race.


Data races are classified into...

...multi-threaded (non-determinism caused by thread interleaving)
and
Single-threaded data races. They can be...
  • co-enabled (where high level events causing conflicting operations are un-ordered)
  • cross-posted (non-deterministic interleaving of two threads posting tasks to the same target thread)
  • delayed (at least one of the conflicting operations is due to a task with a timeout. This breaks FIFO)

These were the data races they found in some popular applications

Droid Racer is available for download from the IISc Droid Racer website.




Probabilistic programming
By Dr.Sriram Rajamani (Microsoft)
 

Probabilistic programs are nothing but C, C#, Java, lisp (or any other language) programs with...

1. The ability to sample from a distribution
2. The ability to condition values of variables through observations

The goal of a probabilistic program is to succinctly specify a probability distribution. For example, how are chess players' skills modelled in a program?

The program looks like this:

float skillA, skillB, skillC;
float perfA1, perfB1, perfB2,
      perfC2, perfA3, perfC3;

skillA = Gaussian(100, 10);
skillB = Gaussian(100, 10);
skillC = Gaussian(100, 10);

// first game: A vs B, A won
perfA1 = Gaussian(skillA, 15);
perfB1 = Gaussian(skillB, 15);
observe(perfA1 > perfB1);

// second game: B vs C, B won
perfB2 = Gaussian(skillB, 15);
perfC2 = Gaussian(skillC, 15);
observe(perfB2 > perfC2);

// third game: A vs C, A won
perfA3 = Gaussian(skillA, 15);
perfC3 = Gaussian(skillC, 15);
observe(perfA3 > perfC3);

return (skillA, skillB, skillC);

Skills are modelled as Gaussians. If player A beats players B and C, A is given a high skill estimate. But if player A beats player B and then B beats A in the next game, the relation between the observations and A's skill is stochastic: the program will be unsure and won't assign too high a confidence to whether A's skill is really better — pretty much like how our brain thinks. Now if A beats C, A gets a better rating, but there's still a lot of flexibility in whether A is really better than B or not.

Ever wondered how scientists make predictions about the population of animals in a forest, or calculate the probability that you'll be affected by a certain disease? Probabilistic models are used here; the Lotka-Volterra population model, for instance, can take multiple variables into account for calculation and prediction. Well-known applications are the calculation of the population of goats and tigers in a forest, the possibility of kidney disease based on age, gender etc., hidden Markov models for speech recognition, Kalman filters for computer vision, Markov random fields for image processing, carbon modelling, evolutionary genetics, and quantitative information flow and inference attacks (security).

Inference

Inference is program analysis. It's a bit too complex to put on this blog post, so I'm covering it only briefly.

The rest of the talk was about formal semantics for probabilistic programs and inferring the distribution specified by a probabilistic program: generating samples to test a machine learning algorithm, calculating the expected value of a function with respect to the distribution specified by the program, and calculating the mode of the distribution specified by the program.

Inference is done using concepts like program slicing, using a pre-transformer to avoid rejections during sampling and data flow analysis.




Statistical model checking
By Prof. Madhavan Mukund (CMI)

Model checking is an approach to guarantee the correctness of a system.

The professor spoke about stochastic systems, where the next state depends probabilistically on the current state and past history (the Markov property), and about probabilistic transition functions.

Techniques for modelling errors
  • Randomization: Breaking symmetry in distribution algorithms
  • Uncertainty: Environmental interferences, imprecise sensors
  • Quantitative properties: Performance, quality of service
Two major domains of applications are in cyber physical systems (auto-pilot, anti-braking) and biological systems such as signalling pathways.


A simple communication protocol analysed as a Discrete Time Markov Chain

Consider a finite state model where you're trying to send a message across a channel and it either fails or succeeds.
The matrix P gives the probability of going from state to state in the finite state model; every row adds to 1. States are labelled with atomic propositions such as {succ} (meaning 'success').

We're interested in:
  • Path based properties: What's the probability of requiring more than 10 retries?
  • Transient properties: What's the probability of being in state S, after 16 steps?
  • Expectation: What's the average number of retries required?
This talk focussed on path-based properties.
Each set of paths can be measured: the probability of a finite path is obtained by multiplying the probabilities of its transitions.
A single infinite path has a probability of zero.

This way you can model and measure paths that fail immediately, paths with at least one failure, paths with no waiting and at most two failures etc.

Disadvantage of the stochastic method
  • Probabilities have to be explicitly computed to examine all states
  • You have to manipulate the entire reachable state space
  • It's not feasible for practical applications
The alternative - Statistical Approach

This approach simulates the system repeatedly using the underlying probabilities. For each run, we check if it satisfies the property. If c of N runs are successful, we estimate the probability of the property as c/N. As N tends to infinity, c/N converges to the true value.

The statistical approach helps because simulation is easier than exhaustively exploring the state space, and it's easy to parallelize: you can run independent simulations. The problems are that properties must be bounded (every simulation should finish in a finite amount of time), many simulations may be needed, and the guarantees are not exact.


Monte Carlo model checking

This is performed using a Bounded Linear Time Logic as input and assuming a probabilistic system D. A bound t is computed, the system is simulated N times, each simulation being bounded by t steps and c/N is reported, where c is the number of good runs.


Hypothesis testing

In this technique, we state the problem as P(h) >= θ and call this our hypothesis H, then fix the number of simulations N and a threshold c.
We also have a converse hypothesis K, where P(h) < θ.
After the simulations, if more than c simulations succeed, accept H; else accept K.

The possible errors are a false negative (accepting K when H holds, with the error bounded by α) or a false positive (accepting H when K holds, with the error bounded by β).


This extensive sampling method is not very practical. A better way is shown in the graph below, by considering values from p1 to p0: the probability of accepting hypothesis H as a function of p, with an indifference region.

  

Other techniques involved are the single sampling plan, adaptive sampling, sequential probability ratio test based model checking, boolean combinations and nested properties.

Challenges for statistical model checking
  • If the probability of an event is very low, we need more samples for a meaningful estimate
  • Incorporating non-determinism is difficult because each action determines a separate probability distribution (as in the Markov Decision Process)
Tools you can use for statistical model checking are Plasma and Uppaal.




Formal methods and web security
By Prof. R.K.Shyamsundar (TIFR)


Information flow control is an important foundation to building secure distributed systems. I asked the professor about this, and he said that to implement this, you need an operating system which is capable of categorizing information packets as secure/trusted/untrusted etc.


Multi Level Security

Sensitive information is disclosed only to authorized persons. There are levels of classification, and the levels form a partially ordered set.
In an information system, the challenge is to build systems that are secure even in the presence of malicious programs.
There are two aspects to this: access control, and correct labelling of information packets (determining the sensitivity of information entering and leaving the system).


The Bell LaPadula Model

In this model, the entities are files, users and programs.
Objects are passive containers for information.
Subjects are active entities like users and processes.

Untrusted subjects are assigned a label and are permitted to read only from objects of a lower or equal security level.
Trusted processes are exempted from these requirements.


Information Flow Control

This is mandatory for security. Confidentiality and integrity are information flow properties. Security is built into the design and you need to provide an end to end security guarantee.
Labels are assigned to subjects and objects. Read-write access rules are defined in terms of can-flow-to relation on labels.


Lattice model

A lattice is a finite ordered set in which every pair of elements has a least upper bound and a greatest lower bound. Using it, you can define reflexivity, aggregation properties etc. It's suitable for any system where all classes need to be totally ordered.


The Biba model

Information can flow from a higher integrity class to a lower integrity class; the model restricts a subject from writing to a more trusted object. Bell-LaPadula is for information confidentiality (preventing leakage of data);
Biba emphasizes information integrity (preventing corruption of data).


Protecting privacy in a distributed network

This involves a de-centralized label model. Labels control information flow.

Trusted execution model: Enforcing data security policy for untrusted code. Eg: How do you know if your secure information isn't being transmitted outside by the antivirus program you installed?

In 1985, the Orange Book defined the trusted computer system evaluation criteria.
The flaw of older systems was that they considered confidentiality and integrity to be unrelated; treating downgrading as unrelated to information flow control was also a flaw.

Readers-writers labels: Uses the RWFM label format.

State of an information system: It's defined by subjects, objects and labels.
Downgrading does not impact ownership, but you are giving read access to others. It's easy to verify/validate required security properties.

Application security protocols: there are authentication protocols and security protocols for this. Eg: the Needham-Schroeder protocol, a variant of which underlies Kerberos.
Also have a look at Lowe's attack on the Needham-Schroeder Public Key protocol.


Web security

For cross-origin requests (eg: a browser accessing Gmail and YouTube at the same time), one of the major problems is inaccurately identifying the stakeholders.




Architectural semantics of AADL using Event-B
By Prof. Meenakshi D'Souza (IIIT-B)



The Architecture Analysis and Design Language (AADL) is used to specify the architecture of embedded software. This is different from a design language: it speaks of components and how they interact.

Consider the cruise control system of a car:

Abstract AADL model
On refining it further, you get this:


AADL allows modular specification of large-scale architectures.

The architecture specifies only interactions, not algorithms. AADL gives a mode/state transition specification system and allows you to specify packages, components, sub-components etc.

  • AADL lets you model application software (threads, processes, data, subprograms, systems), platform/hardware (processor, memory, bus, device, virtual processor, virtual bus)
  • Communication across components (data and event ports, synchronous call/return, shared access, end-to-end flows)
  • Modes and mode transitions
  • Incremental modelling of large-scale systems (package, sub-components, arrays of components and connections)
  • There's an Eclipse-based modelling framework, annexes for fault modelling, interactions, behaviour modelling etc.


Will you be able to use a formal method to check the property templates of AADL?
You have to, because the checking has to be automated: AADL can't check an end-to-end latency on its own.
A formal method is needed. The same goes for the bandwidth of the bus (in the diagram).


Event B 

It's a formal modelling notation based on first-order logic and set theory. Refinements in Event-B are formalized, and dynamic behaviours are specified as events.
The Rodin toolset has a decomposition plugin which breaks a machine down into smaller machines/components.

The number of proof obligations in Event-B is huge for even a small set of AADL flows.

This is what an Event B program would look like:



This is how AADL is converted to Event B


Proving of architectural properties:
First, an Event-B model corresponding to the AADL model of the system is arrived at by manual translation.
The AADL model includes the system, devices, processes, threads, modes, buses, timing properties for thread dispatch, and transmission bounds on buses.
Global variables model time and the timing properties of threads in Event-B.
The end-to-end latency requirement of AADL is specified using a set of invariants on the overall model.
The latency requirements are discharged through 123 proof obligations, a few of which are proved through interactive theorem proving.


Conclusion

It's possible to provide semantics of architectural components in AADL using Event-B. Such semantics will help designers to formally prove requirements relating to architecture semi-automatically. Professor Meenakshi's team is currently implementing a translator that will automatically generate and decompose incremental Event B models from incremental AADL models.


Thursday, October 16, 2014

Conference: Investor Relations & ESOPs For Startups

Held on 11th October at IIM-B, this conference was about building long-lasting investor relationships and creating employee stock ownership plans. The session began with guest speakers Mr. Cherian and Ms. Neela introducing ESOPs to us.



The Panel

  • M.J. Cherian (Co-Founder and CEO of Keynote ESOP)
  • R. Natarajan (Nats) (Managing Director & CFO)
  • Mr. TC Meenakshisundaram  (Founder & Managing Director of IDG Ventures India Advisors)
  • Neela Badami (Works with Samvad partners. Gold Medalist at National Academy of Legal Studies and Research (NALSAR) University of Law, Hyderabad)
  • Vivek Subramanyam  (CEO, iCreate)

Stock based compensation for startups

ESOPs are not the only option. There are also Restricted Stock Units (RSUs, which are granted free in the USA; in India they can't be free because regulations require a face value), Stock Appreciation Rights (SARs), also known as phantom stock, Employee Stock Purchase Plans (ESPPs) and Restricted Stock Awards (RSAs, which don't really work in India; TCS giving shares at one rupee each, for example).

ESOPs can either be set up by the company board, or the company can set up a trust. The trustees hold shares of the company and ensure that shares are issued to employees in accordance with the exercise. For listed companies this used to result in shady dealings, because the company could play around with the share price, but SEBI has since caught on and disallowed such practices.

These options go through a lifecycle of Grant, Vesting, Exercise and Sale.


Coverage
It's very important to pick the right kind of people to reward with stock options. Pick people who value it. Stock options are given over and above the compensation you're already paying them, and the minimum benefit of it is that it motivates them to stay in the organization. Figure out what value you're giving the employee with each option you grant.

Grant objective
You basically give out stock options as a past performance reward or in anticipation of future performance or as a joining bonus.

Grant price
You can grant at fair market value, at face value, or at a discount to fair market value (the problem is that due to volatility, people might get disappointed when the stock value suddenly goes down, so granting at a 40% or 70% discount is the wiser option). The accounting cost is lower, and easier on the books, when options are granted at fair market value than when granted at face value.

Vesting
How does the employee earn a benefit out of it? Once you grant a stock option, there's no ongoing way to tie it to whether he/she is performing well; once granted, it's done. So don't link vesting to the performance of the employee.
Instead, link it to the performance of the company. The problem here is that if there's a lull in the industry, you still have to retain employees and keep them motivated.

Exercise and sale
When it's time to exercise options, you have new partners coming in, and it's very important to protect yourself, your company and your employees when you structure your stock options. Have a clause that says that if a person has to leave the organization within one year, the person gets accelerated vesting.
Having new partners can lead to deal breakers. E.g. if you're having a trade sale and all employees have to be ready to sell, it becomes a problem when some of them decide not to sell. You need drag-along clauses in the agreement to avoid this problem. Investors are terrified of the fact that there are shareholders over whom they don't have any control (employees). The drag-along clause has to be highlighted upfront to the employee so that there's no heartache later.

Being aware
When offering a stock option, the board structures it. You need a detailed plan document which covers all the terms you had in mind while you went through the entire exercise. The thought process that goes into designing each scheme is important and specific to you; it matters to know why you took a decision and why you rejected the alternatives. Understand the stage of life of your company (do you see liquidity soon? Is it top-line growth with profitability not in the immediate future?). For legal sanctity, you need an agreement where the employee says he's understood everything. Document and communicate shareholder approvals and board approvals. Plan and document employee agreements, and talk about progress and wealth. Employees should know the annual profit of the company: the top-line, the bottom-line, the revenue drivers, and hence the potential value. Most technical employees, and even finance employees, aren't aware of the overall financial picture of the company. It's very important for companies to talk about this and inform their employees.

In short, you just want to reward an employee with money. It's not rocket science. You let them know the expectation and they ride the wave and get what is projected to them, without them investing a single rupee. Keep the exercise period as long as possible for an unlisted entity since you don't know when you'd get liquidity, and you don't want the employees to lose their options because they didn't want to invest money at that point in time.

A note on regulations
Some regulations have changed since April this year. Keep constant track of changing regulations.
  • You always have to get prior approval from shareholders. 
  • You can only cover permanent employees. You can't cover promoters and independent directors anymore. 
  • You need a minimum of one year from grant to vesting. The employee can't sell for one year.  There's a lock-in period.
  • The minimum price cannot be zero. It has to have a face value. 
  • Get a good company secretary and learn the compliance requirements of the new Companies Act, especially w.r.t. the issuance of shares. The Act was passed in the wake of corporate scams and mistrust, so the requirements must be examined carefully, or they can turn into non-compliance issues that become embarrassing when you're raising money from VCs at the next stage.
  • Be aware when dealing with foreign exchange. Check for any RBI compliance involved.


The panel discussion

Investor relations
Mr. Vivek began with how his story with investors began: they took a call early on that, since an investor is putting money into you to build a business, one simple way to keep the relationship good is to be completely transparent. Ensure investors get clarity on what is going well and what is going badly, even during challenging times.
Mr. Natarajan began by saying that the fundamental word for the relationship is trust. Before, during and after the investment, trust and transparency are very important. From the time the term sheet is signed to the time the deal is closed (around 40 to 60 crucial days), transparency matters because several deals may get conked off during that period. The entrepreneur has a responsibility to the investor, because it's the investor's money coming in. Both parties should believe that they're working towards a common goal.

In the USA, the founder is willing to hire a CEO and report to the CEO. Unfortunately, it hasn't yet caught on in India because the founders feel that they need to be the CEO. Here, the founder thinks that he/she needs to own the majority. In the USA, the founder sometimes only owns 10% of the company and yet goes on to make the company successful (reminded me of the first Spiderman movie where the founder of Oscorp is asked by the board to leave, and he disappointedly and angrily says "...I created this company...do you know how much I had to sacrifice?!!!"). Anyway, Mr. Natarajan feels that Indian entrepreneurs need a change of mindset for the investor-investee relationship.

Mr. TCM explained why trust is important, giving an example from his company where a regulatory issue came up (a huge risk) and they called up the investor and informed them of it. After the initial shock, it turned out that informing the investor stood them in very good stead. The investor consulted an independent entity and took the decision to complete the transaction with Mr. TCM's company. If the information had not been disclosed and was later discovered, the distrust it builds over the next five years is just not worth it; nothing you say will be believed. When in doubt, go ahead and communicate. Over-communicating is also okay.
The investor is not the only one with decision power just because he/she has the money; the entrepreneur has just as much of a choice in which investor to deal with. But make a well-informed choice. Don't reject someone just because they're asking tough questions. People who won't ask tough questions often won't guide you either. Strike a balance.

Mr. Natarajan says that just the way investors do the due diligence on entrepreneurs, the entrepreneurs should also do the same to the investors.
As for ESOPs, give them only to employees who value them. In a startup, you can't pay a market salary; you pay a salary and compensate the rest with an ESOP. So the ESOP has to hold some value and meaning for an employee to stay in the company, and the employee should really trust the option too. Founders should be okay with distributing stock options to employees, but the options should have some value: there's no point being a 100% owner of a zero-value company. Founders have to realize that they're also stakeholders, not just owners.

You shouldn't live with the fear that the investors will take control of the company. The focus should be on creating value. Employees need to feel they're part of the team and that what they're doing increases value and enriches the company; the feel for the cause has to be really high. There are stories about how the watchman and the driver in a company got shares. Those make nice stories, but in reality you should give options to those who believe in the power of ESOPs and share the sense of purpose. If an employee gets a 20 or 30% hike from some other company and goes away, the ESOP has no meaning.

Mr. TCM says that in almost all companies, ESOPs have never been fully utilized.


Questions from the audience

What are the quantitative and qualitative elements which define the valuation of a company?
It's a question of what an investor believes can be created going forward, looking at the market: the theme, its differentiation and the dynamics of competition. It's based on what can be created and what the value would be later.

About vesting, if promoters move out, will the stock remain with them or will it only be for the vesting period?
For promoters too, the stock is subject to vesting. If a promoter leaves, the un-vested part comes back to the company. The logic: if you as CEO leave two years from now, we need to hire another CEO and give him stock; where will that come from? In the US, options vest on a monthly basis. In India, it's on a yearly basis.
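The yearly-vesting arithmetic can be sketched as follows (my own illustrative sketch; the one-year cliff comes from the regulations mentioned earlier, but the four-year schedule and share counts are hypothetical):

```python
def vested_shares(total_options, months_since_grant,
                  total_vesting_years=4, cliff_months=12):
    """Yearly vesting with a one-year cliff: nothing vests before the
    cliff, then an equal tranche vests at each grant anniversary."""
    if months_since_grant < cliff_months:
        return 0
    full_years = months_since_grant // 12
    # Cap at the full schedule; integer maths keeps share counts whole.
    return total_options * min(full_years, total_vesting_years) // total_vesting_years

# Hypothetical grant of 4000 options vesting over four years:
print(vested_shares(4000, 6))    # before the cliff -> 0
print(vested_shares(4000, 12))   # first anniversary -> 1000
print(vested_shares(4000, 60))   # fully vested -> 4000
```

Under monthly vesting (the US style mentioned above), the tranche would simply be computed per month instead of per anniversary.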

What's the difference between shares and a stock option pool? Will creating the pool dilute the overall shareholding pattern?
As options are exercised, it dilutes. You don't have to alter the shareholders' agreement; you can pass a shareholder resolution and go ahead with the creation of the pool.
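The dilution mechanics can be shown with hypothetical numbers (my own sketch, not from the panel):

```python
def stake_after_exercise(my_shares, shares_outstanding, options_exercised):
    """Exercising options issues new shares, so every existing holder's
    percentage falls even though their own share count is unchanged."""
    return my_shares / (shares_outstanding + options_exercised)

# Hypothetical: a founder holding 60,000 of 100,000 shares, with a
# 10,000-option pool that gets fully exercised.
before = 60_000 / 100_000                              # 0.60 (60%)
after = stake_after_exercise(60_000, 100_000, 10_000)  # ~0.545 (54.5%)
```

The founder still owns 60,000 shares; only the percentage has changed, which is exactly why the answer below warns against promising anyone "3% of the company".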

While hiring employees, we based ESOP's on tenure. But in the presentation you showed that it also depended on the employee and company performance. So can the conditions be altered to include performance?
Yes. You can put in a performance matrix, communicated on an annual basis. When you grant a stock option, never say "I'll give you 0.5% of the company" or "3% of the company"; only tell them the number of shares you're giving them. The employee can sue you if they think they own 3% of the company and later realize it isn't actually true. There's no such thing as somebody owning 3%; he just has 4000 shares, and he does not hold 3% forever. Another mistake companies make is promising that at exit the shares will be at a certain price and that this is the gain the employee will make. Never give these in writing; they'll come back to bite you at a later date as a legal obligation.

In a startup environment, people prefer to hire young managers or employees, and they don't really appreciate ESOPs. Does it really make a difference to keep aside 10% of shares for them?
That's exactly the segment of people for whom you shouldn't keep it aside :-) Set it aside until the leadership is proven and the company scales. You're already paying them what they want, so you don't really need to use ESOPs; eventually they can be used, as people come to understand their value. For some employees, like salespeople, stock options may not make sense. A salesperson has to be earning his commission rather than thinking about stock options; he has to earn a certain amount every month. If he's earning Rs.15,000 or Rs.20,000 a month, he's not going to stay with you for that perfect tomorrow which would buy him a yacht later. He's planning to buy a bike right now, for which he needs a Rs.2,000 EMI every month. He'd rather buy that bike now than be on the yacht later. Besides, the yacht is just a promise which may not happen; the entrepreneurs themselves haven't bought their yachts yet :-)

When you open a new company, how do you decide the number of shares to start with? When the pool gets exhausted and a new investor comes in and wants shares, where do those shares come from? How does that impact the existing shares?
Typically, people keep Rs.10 as the face value and then decide the number of shares. The reason is that, if need be, they can split it down to Rs.1; the recommendation is not to start at Rs.1 because then there's no leeway to go down further. If you've exhausted the pool, you can always create more of a pool; the founders and investors will contribute more as long as it creates value. If the shares are over, you have to increase the authorized share capital, and there's a process for it and experts who can help with it. It will never be a problem as long as everyone agrees. Reducing share capital has to comply with the rules; non-compliance is a criminal offence, so employ a good company secretary.

What is the optimum pool size?
There is nothing called 'optimum'. Typically, it's 10% for a startup. When the company becomes very big, the pool can become smaller.

In a company, is there a thumb-rule to decide who gets how much of the company?
Typically, it's rather democratic, because it's difficult to say who contributes how much; investors don't determine that. It's about how co-founders find an equilibrium amongst themselves, and there's no right or wrong in it. It's also good to document the inter-se relationship between owners: it avoids heartache later and gives a good, professional impression to the investor too.

Is sweat equity part of the ESOP pool? How is it given it to promoters?
Sweat equity shares are separate from ESOPs. For a promoter to get sweat equity, you have to value the intellectual property, product or service he contributes, and that's accounted as an asset on your balance sheet which can be monetized over time. You have to put a value to it: technically you're issuing shares in lieu of cash, so you must have an asset against which the shares are issued.

When you take a seed or equity fund, the promoters will be working for a salary much lesser than a market salary. So how is that compensated?
The promoter has to work for a salary less than the market salary; that's why he's a promoter :-) It's part of life. That's also why there's no sweat equity: the equity is for the sweat :-)


Sunday, October 12, 2014

WhatsApp on PC

Colleagues didn't believe me when I told them they could actually use WhatsApp from a PC.

Turns out there are two options:
  • Install using BlueStacks; a mobile emulator
  • Use (or rather, don't use) Androck + WhatsApp.
AVG detects a virus in the Androck installer. I instinctively decided to run a scan on the installer, and my instinct turned out to be right.





Bluestacks

BlueStacks, on the other hand, seems safe to use. Techrism has a tutorial on how to install and use WhatsApp with BlueStacks. I've tried it myself and it works fine, but it can be a pain: it takes a long time to install (downloading around 30MB of data during installation) and to start up.
You might want to de-select App Store synchronization and enablement if you don't want to access Google's app store from BlueStacks. If you do, I'd recommend creating a separate Gmail account for it, since you're expected to log in to the Gmail account from BlueStacks and there's no guarantee that your password is safe.

If it's stuck at this, it's probably because you have to tell your firewall to allow BlueStacks
After installation, it may ask you whether to allow Google to collect location data. I think it's best to select "Disagree".

Once installation is done you'll see WhatsApp in the Top Apps list. Otherwise, use the search option to find it. If you had enabled the App Store, you'd have to enter your login info twice and then WhatsApp will be ready to install.


Soon you'll see an "Open" button in place of the "install" button. The rest of the procedure is intuitive, where you review the terms of service, enter your country code and mobile number, they verify your phone number by sending you an sms. If the verification fails, they auto-call up and mention a code which you type on the screen. Note that the "Next" button is on the top-right corner of the screen.

I wish the WhatsApp team would create their own PC version of the application.


Saturday, October 11, 2014

Ticket to Mars!

Delta IV Heavy. Image from Wikipedia
This would probably count as one of the silly things I've done, but I did it anyway just for the fun of it: I've entered my name as one of the names that gets sent to Mars on a microchip inside the Orion multi-purpose crew vehicle, launched on a Delta IV Heavy rocket.
It's first going to be part of a test flight from Cape Canaveral, Florida on December 4th, 2014: two orbits around the Earth, re-entry into the atmosphere at 8900 m/s, reaching temperatures of 2200 degrees Celsius, and a landing in the Pacific Ocean near California. The microchip will be carried during the test flight and during the actual launch in late 2017.

You can register too, and get your boarding pass before Halloween (October 31st). This is the beautifully designed website.


and the boarding pass you'll get:


Be aware though, of the terms and conditions when submitting your name. I ran it through EULAlyzer, and it came up with two small cautionary sentences.


Be careful not to share sensitive personal details. Your details will be captured by www.icontact.com who are also said to be bound by the same terms and conditions that you agree to, but why give them the information in the first place? Your IP address, which browser and OS you use will also be captured.
So I decided not to give my full name.


Orion
Orion is a manned spacecraft (flying unmanned or with up to 6 astronauts) being built by Lockheed Martin. It's got a diameter of just 5m, so that's a rather tiny space for the astronauts to be in :)

Image from Wikipedia
Although the test will be done with the Delta IV rocket, the actual launch on December 17th, 2017 will happen with the Space Launch System. A single launch costs more than Rs.3,062 crore. Imagine the cost in 2017.

Four astronauts would travel in the spacecraft, which is designed for a maximum active crew time of 21 days. It also supports a 6-month quiescent (dormant) period, during which life support would be provided by the Deep Space Habitat (which allows a crew to live and work in space for up to a year).


Mangalyaan

Our own spacecraft, already at Mars, has been taking pictures of the planet.

Image copyright: ISRO

You can view the fruits of the hard-work of our distinguished scientists who made our nation proud once again, on this website.

A project that cost Rs.428.68 crore, the Mangalyaan is capable of:
  • Measuring the deuterium/hydrogen ratio (which gives an estimate of the amount of water lost to outer space)
  • Measuring the amount of methane in the atmosphere with a methane sensor
  • Examining the neutral composition of particles in the exosphere with a quadrupole mass analyser
  • Measuring the temperature, emissivity, composition and mineralogy of the Martian surface with a thermal infrared imaging spectrometer
  • Taking pictures with a colour camera
Mangalyaan communicates with the Indian Deep Space Network at Byalalu, a village in Ramanagara district (where Sholay was shot), just an hour's drive from Bangalore.

Hope it won't be long before Mankind starts getting boarding passes to take themselves to Mars. Who knows...you might not even need a boarding pass. Someone might invent teleportation by then! :-)


Remember Neil Armstrong's "That's one small step for man, one giant leap for mankind" (click the link to listen to the historic dialogue). I wonder what dialogues the astronauts would be preparing, for their first step on Mars. Who would get to take the first historic step?

Wait and watch...

Monday, October 6, 2014

Mr.Somebody Else

Here's some creativity I'm sure you'd enjoy and would want to use...

One of the biggest challenges we've faced in the volunteering team at our workplace, was in helping people break out of the notion that they did not have the time to do any volunteering or that they would not be able to do anything meaningful if they volunteered. To add to that, there were so many volunteering groups who created a bad reputation for volunteering.

So if I don't feel the need to volunteer, who is going to help those who need help? Who helps the environment? Who fulfils the little responsibilities: switching off a light, turning off a running tap, helping an old person across the street, throwing trash into the dustbin instead of onto the road, switching off the vehicle engine at the traffic signal?

Somebody Else, of course!

So I'm proud to introduce to you today (although I created him two weeks ago), Mr.Somebody Else!


Mr.Somebody Else was an instant hit with my volunteering team! The cheerful, loveable, bald-headed grandpa figure who would be happy to do a good deed any day!

What my team didn't know at that time though, is that my initial ideas for Mr.Somebody Else, were to create a person who seemed responsible and no-nonsense. A square-jawed, sturdy-nosed, bespectacled face is what came to my mind at first. But those looked too serious, so I had even tried a bean-shaped, happy face to create a character whom people would be able to relate to more easily than the strict square-jawed face.

These are the other faces:


Would people be able to look up to any of these faces as that of Mr. Somebody Else? What background story could these characters have?

A lightbulb moment gave me the idea of Mr. Somebody Else being a jolly old man who had spent his life doing the good deeds that others were actually supposed to do. And thus came into being the cheerful person who nevertheless feels very saddened when he narrates his story and his message.

Here's a small part of his story and his message:

Hello my dear. People call me Mr.Somebody Else. I'm a rather important person in their eyes I guess.... Everytime a little act of kindness or a small responsibility is to be done, people expect me to do it. I've been happy to do these things, but I have grown very old now. If only people knew that it was possible, and very easy for them to do these things.... If not alone, then together as a team!
Below, I've outlined some of the many things you can volunteer to do to make this world a better place. I sincerely hope that from now on, the phrase "Somebody Else will do it" will change to "Let's not wait for Somebody Else to do it. Let us do it together!!!"


Attribution

The idea of Mr.Somebody Else and the content is covered by the Creative Commons License.


But with attribution, do feel free to use the concept of Mr. Somebody Else, preserving the same bald-headed, moustached, bearded grandpa.


How you can help Mr.Somebody Else by volunteering:
(this list will be updated further with time. You can add suggestions in the comments section too)

Based on time availability
  • Once in three or four months: Donate your blood. Some good people you can contact are Sankalp India Foundation, Rotary Club, Lions Club or ask a nearby hospital if they have a blood bank. 
  • 6 hours a month on weekends: Join Mentor Together to give a child the best gift they can ever receive: A good education.
  • A few hours on a weekend: Join The Ugly Indians in a spot-fix. Register here for Bangalore and here for outside Bangalore.

Based on skill
  • Software programming: Help out an NGO with creating or maintaining their website or by creating a software application that will help make their day-to-day activities simpler (Eg: My department head Yatan sir went to teach children at an orphanage, and when the Sister there mentioned how difficult it was for them to maintain records of the donations they received, sir and a colleague, Ajit created a software interface with a database that kept track of the donations. A person at ThoughtWorks created a software that helped keep a tab on child mortality rates, for an NGO in the UK)
  • Graphic design or video editing: NGO's, social entrepreneurs and volunteering teams require good graphics, good videos and plenty of creativity at their disposal. A few hours of your help will help them a lot.

More info coming in due time....


Just remember, that one thing Mr.Somebody Else expects of you, is that you volunteer responsibly. Do some proper research about the field you're volunteering for, understand what volunteering is, understand what your responsibilities are, what the NGO's responsibilities are and Mr.Somebody Else assures you, you'll have a very fulfilling experience. Family, acquaintances and good friends who join in with the effort can make it even more worthwhile!



More on Volunteering


Sunday, October 5, 2014

Checking your website links during development

When creating a website, it can become very hard to check whether all the links you created are valid and working. I know you may never make such errors, but some people do ;-)
Or you're maintaining your website, and a few changes break some links. How do you check for these? You can't be expected to manually click each and every link....

Two tools come in handy here: Xenu and Windows Grep (or use it in Linux using Wine and grep).
Windows Grep is handy in case you want to grep for file names and locate the calling file; it also supports regex. Linkchecker can also show you the calling file, but Xenu beats it hands down in terms of features.


Xenu in action

Give Xenu a web link, or even point it to the index page of a website stored on your PC. Xenu shows you broken links, empty web pages, missing files, the depth of search (enter a link, move on to another link on that page, and so on), the number of links going out of a page, the number of links coming into a page, the error, and the character set. Right-click on a row and you get options to show which file the report belongs to.
Under the covers, it's doing a graph traversal, the same way that search-engine bots crawl the internet.
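That graph traversal can be sketched in a few lines of Python. This is my own minimal, hand-rolled sketch: it crawls an in-memory "site" (a dict of page name to HTML) breadth-first and reports links pointing at missing pages. A real checker like Xenu would issue HTTP requests and inspect status codes instead of looking up a dict.

```python
from collections import deque
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def find_broken_links(site, start):
    """BFS over pages reachable from `start`; returns the set of
    (source_page, target) pairs whose target doesn't exist."""
    broken, seen, queue = set(), {start}, deque([start])
    while queue:
        page = queue.popleft()
        parser = LinkExtractor()
        parser.feed(site[page])
        for target in parser.links:
            if target not in site:
                broken.add((page, target))      # dead link found
            elif target not in seen:
                seen.add(target)                # visit each page once
                queue.append(target)
    return broken
```

The `seen` set is what keeps the crawl from looping forever on pages that link back to each other, and recording the source page alongside the dead target is exactly the "which file does this report belong to" feature mentioned above.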


Xenu can even convert URLs to lower case if you wish (domain names are case-insensitive, but the path part of a URL served from a Linux machine is case-sensitive: /Page.html and /page.html are the same file on a Windows server but different files on Linux).


Crawling through links is spread across multiple threads (although the number of threads that truly run in parallel is limited by the number of hardware threads in your processor; the extra threads have to time-share).



Compared to all this, Linkchecker has very limited functionality:



More

There are even commercial link checkers with more advanced functionality than Xenu.


Thursday, October 2, 2014

The Ugly Indian: Spotfixing at Richmond road

After the underground festival, which was the 'cream of the cake' as I mentioned at the end of the blog post, today was finally a chance to put in the hard work that cleaned up three spots on Richmond Road, aka General KS Thimayya Road. This one was to be a bit different though, as the email from The Ugly Indian said:
"This spotfix will be a tribute to Gandhiji and will be different from any regular spotfix that you might have attended."
There was no painting done today, but if you're a first-timer at the next spotfix, it'll help to bring a few things along with you; without them it'll be difficult. Here's the list, and the reasons:

Checklist
  • Hand gloves. You'll be handling some very dirty, stinky garbage, and you'd actually want gloves that cover your arm beyond your wrist. The thicker the gloves, the better. You might have to handle shards of glass and stones which can tear the glove.
  • A mask. Most of the places cleaned up are also used as impromptu urinals, so you'd have to bear quite a stink.
  • Large garbage bags (the large black plastic ones you get to buy) or empty cement sacks (perhaps jute sacks would also be ok for dry garbage). A lot of the garbage has to be shifted to a spot where the BBMP truck can collect it. These bags help.
  • A tool that will help you clear thick, wet, layered garbage. A broom won't really work in this case. You'd need something like a spade or a mumtee (wonder who created the funny name 'mumtee'. I'm inventing a new name for it right now. Let's call it the "Angled spade". Costs approx Rs.120). For clearing garbage in tiny places, it'd help if you have a smaller tool too.
  • Clothes you don't mind dirtying. If the spotfixing involves painting, then clothes that you don't mind having paint falling on.
  • Shoes or boots. Don't wear sandals, as garbage is sometimes full of things that may cut your skin if you leave it exposed.
  • Cap or a hat. Cleaning up takes time, and the mid-day sun can be scorching.
  • Drinking water. Not just for drinking, but to be able to pour on a wound or a cut on your skin if it happens.
  • A small first-aid kit. You probably won't need this if you use the tools to clear the garbage instead of directly picking up garbage with your gloved hands.

Purchasing most of the above things won't cost you more than Rs.200. Considering that we spend more than that for an overpriced meal at fast-food joints and restaurants, this little bit of money is completely worth it for the cause it's being used for.


A wishlist for the organizers to consider
  • Notifications: Put in info about the event, in the page that's shown after the Google Form is filled up. This way you won't have to send a separate email, and even those who register late will get the right info.
  • Complete info: What will be accomplished on that day: Inform volunteers (in the notification itself or during the activity) of the tasks that will be accomplished, why it's being done and how it is to be done. Most volunteers end up just cleaning garbage and not knowing the bigger picture of how the BBMP was contacted, what precautions are to be taken etc. People will take initiative and keep volunteering, when they know what is going on and how and why it's done.
  • Followup: Send a followup email to participants, with info about the activity done, what remains to be accomplished and the approximate date when they can come to complete the activity.
  • Have a constant email ID: Emails for various TUI activities sent from different email IDs can get confusing.


The garbage clearing

I began at a corner used not only for dumping garbage, but also as a urinal. A spot with an eternal stench and thick black layers of wet garbage, with insects crawling and flying around. Definitely not for the faint-hearted, but we didn't really care. The mood among the masked crowd was positive and everyone wanted to get the work done, no matter what kind of garbage it was. People gave each other suggestions to make the process more efficient, shared tools, took turns when someone got tired, cracked jokes about finding a gold mine or dinosaur bones under the garbage, and basically self-organized the whole process while motivating each other. The only instructions given to the crowd were to keep the garbage in separate piles of debris (stone and mud) and 'other waste', and to form a chain of people passing garbage to one another. That's it. Everyone improvised and came up with their own ideas; those who didn't bring tools found thin slabs of stone which they used to dig up garbage and shift it to the pile. Nobody wanted to be standing around doing nothing.

The second spot was near a transformer and an old water fountain. The grass was cleared, stones were lined up, and the mud was firmed up by volunteers who stomped on it.

The third spot was at another transformer and the stretch of footpath leading up to it. The garbage at the transformer area was a lot messier than it looks in the pic. Layers and layers of garbage! Lift a stone slab and there's more garbage, and insects, under it! We were advised to stay away from the wires and not go too close to the transformers. Once cleared, the muddy edges of the footpaths would eventually be tarred, and the footpath edges painted, on another day.

You'd be surprised at the number of garbage items that clog up the entry points of drains. No wonder our streets get flooded. I cleared up one such point too.


With the garbage piles ready, in came the cavalry: the garbage truck. We formed two chains of volunteers who passed the garbage along in pans until it reached the truck.


People nearby, happy to watch volunteers do this work, sponsored soft drinks for everyone.



And we took a much-needed rest while the BBMP truck went about collecting the remaining piles of garbage and debris. We learnt that the debris would go to a separate dumping spot where builders could make use of it; the remaining garbage would be sent off to the landfills.


An interesting detour

I took a walk around the place searching for a tap to wash my hands, and a roadside vendor was more than happy to offer me a bucket of water and soap. She had been watching us, and was soon joined by an old lady who also worked at the roadside, who began telling me, with gratitude, how even her own children wouldn't bother cleaning up a place. She was happy to see us, people who didn't actually have to do this work, braving the stench and cleaning up the place while expecting nothing in return.

After washing my hands, while walking back to the first corner where I had started the cleaning (the place was now deserted, as the garbage was cleared and all the volunteers had moved to the second spot), I noticed two men walking up to the corner, scanning the place like the Predator scans for a dirt-covered Arnold.
These men were the regulars who used to take a leak at the spot.
Both of them scanned the location, probably thinking "Damn it! Somebody cleaned up this whole place. Now where on earth do we go?!!!". And instead of going to the usual spot, they walked up to the place where the large pipes were piled up (see the initial pics) and took a leak there.


A spontaneous cleanup

While we were waiting and the truck was collecting debris from spot 3, a few guys noticed that the road in front of us was covered in mud, presumably the mud that settles after a heavy downpour. They picked up their brooms and started sweeping the area! Just like that. Nobody told them to do it.



A van driver passing by sarcastically asked them what they were doing. Initially, most of us also felt it was odd. But some other volunteers saw that the brooms couldn't shift the mud, and picked up the angled spades to clear it. I realized that mud accumulated on the road is something I've always disliked myself, so I picked up an angled spade too and joined them. Soon plenty more volunteers joined in, automatically forming the chain that passed the mud along, and we were totally surprised at the huge pile it became! The road was much cleaner than before, and we could even see the white paint that had been covered by the mud.


We even tried to clear up what we thought was an entry point for water on the road to flow into the drain, but were sorely disappointed to see that it was filled with mud! :-)


In the end, the mud was transferred onto the truck, many volunteers climbed onto it for a group photo, there were cheers at the good work done, and tools were distributed among the volunteers (many who had left early left their tools and brooms behind) before we headed home tired, but very happy!


P.S.: By the way, whether or not you're new to such an initiative, try your best to stay till the end of the activity. Most people tend to leave early. Staying back not only ensures that you don't miss out on everything that happens (the spontaneous cleanup, for example), it also ensures that you're there to help out till the end. A lot of helping hands are required for such an activity, and when some volunteers get tired, it helps if there are others to take over the work.
Many girls were enthusiastic and mentioned that in spite of all the work, they didn't feel weary. An experienced lady overheard them and smilingly told them that although they didn't feel any muscle pain now, it would start that night or the next morning. Anyone who remembers their first few days at the gym will know what this means :-) But this is a good pain. It makes you stronger!