Saturday, December 13, 2014

Getting your head around d3.js [part2]

Continued from part1.

When you first look at d3 code and the output it produces, the most confusing parts are enter(), data(), datum() and exit(). They just don't seem to fit the programming syntax you're familiar with from other languages. Not to worry: it's just a matter of understanding the purpose of the syntax, and you'll be flying along.

If you wanted to draw a few circles with JavaScript, you'd probably do something like this:
<html>
<body>
<script>
var svgHolder = document.createElementNS("http://www.w3.org/2000/svg", "svg");
svgHolder.setAttribute("height",500);
svgHolder.setAttribute("width",700);
document.body.appendChild(svgHolder);

for(var i = 0; i < 10; i++)
{

var circles = document.createElementNS("http://www.w3.org/2000/svg", "circle");
circles.setAttribute("id", "c"+i)
circles.setAttribute("cx", (i+1)*50)
circles.setAttribute("cy", 100)
circles.setAttribute("r", (i+1)*10)
circles.setAttribute("stroke", "black")
circles.setAttribute("stroke-width", "3")
circles.setAttribute("fill", "red");
svgHolder.appendChild(circles);
}
</script>
</body>
</html>


To do the same thing in d3:

<html>
<head>
<script type="text/javascript" src="http://d3js.org/d3.v3.min.js"></script>
</head>
<body>
<script>
var cir = ["c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8", "c9", "c10"];
var svgHolder = d3.select("body") //select the body tag
                .append("svg") //add an svg tag to body
                    .attr("width", 700) //specify the svg tag properties
                    .attr("height", 500);

svgHolder.selectAll("circle").data(cir).enter()
         .append("circle") //add an svg circle
                .attr("id", function(d) {return d;})
                .attr("cx", function(d,i) {return (i+1)*50;})
                .attr("cy", 100)
                .attr("r", function(d,i) {return (i+1)*10;})
                .attr("stroke", "black")
                .attr("stroke-width", "3")
                .attr("fill", "red");

</script>
</body>
</html>
 


I know what you're thinking:
"Hey, where did the for loop go? What does data() and enter() do? How did you manage to select all circles when none of them were yet created? What is d and i in those functions?"
We are just touching on the tip of the iceberg of the beautifully designed d3 syntax.


SelectAll
The cir array basically acts as our specification of the loop limit. When you selectAll circle elements in the DOM (if they don't exist yet, d3 will create placeholders for them), you're also feeding each circle with data from the cir array using the data() command.

What happens is that for every circle that is created, d3 assigns the array values from cir to the placeholders one by one. So placeholder 1 will get the value "c1", placeholder 2 gets the value "c2" and so on.
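To make the pairing concrete, here's a plain-JavaScript sketch of the idea. This is a conceptual model of the index-based join, not d3's actual implementation:

```javascript
// Conceptual model of d3's index-based data join (not d3's real code):
// data with no matching element goes into "enter", elements with no
// matching data go into "exit", and matched pairs form "update".
function joinByIndex(existingElements, data) {
  var update = [], enter = [], exit = [];
  var n = Math.max(existingElements.length, data.length);
  for (var i = 0; i < n; i++) {
    if (i < existingElements.length && i < data.length) {
      update.push({ element: existingElements[i], datum: data[i] });
    } else if (i < data.length) {
      enter.push({ datum: data[i] });              // placeholder: no element yet
    } else {
      exit.push({ element: existingElements[i] }); // element with no datum
    }
  }
  return { update: update, enter: enter, exit: exit };
}

// No circles exist yet, so every value lands in the enter selection:
var join = joinByIndex([], ["c1", "c2", "c3"]);
console.log(join.enter.length); // 3
```

In our circle example there are no existing circles at all, which is why every element of cir becomes an enter placeholder.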


Enter and Data
The enter command tells d3 to take whatever commands come after enter, and attach them one by one to the placeholders. So the svg circles that get created with append("circle") will get attached to the placeholders that were created with selectAll("circle"). The number of placeholders to create is determined by the number of array elements in cir.
If you're dealing with only one element, you can use datum() instead of data().

It's that simple.


Anonymous functions and parameters d, i
Now, while specifying attributes for the circle with attr, you'd want to know something about which element of the array you're dealing with, right?
d3 allows you to use anonymous functions which accept zero to three parameters. If you specify the parameter as function(d), the value of d will be the array value of that particular placeholder.
Eg: For the first circle, d will be "c1" , for the second circle, d will be "c2" etc.

The second parameter is i, as in function(d, i), which tells you the index of the circle: whether it's the first one, the second one, the third one and so on. Similar to the i we use in for loops.

There's also a third parameter that you can get, which gives you data from a parent node, but let's not get into that for now.
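You can see d and i at work without any SVG at all: a plain array map passes its callback the same (value, index) pair that d3 passes to its attr callbacks, so it reproduces what the cx callback above computes.

```javascript
// The attr("cx", ...) callback receives (d, i) for each bound element.
// Array.prototype.map invokes its callback with the same (value, index)
// pair, so this computes the same cx positions as the d3 example:
var cir = ["c1", "c2", "c3"];
var cx = cir.map(function(d, i) { return (i + 1) * 50; });
console.log(cx); // [50, 100, 150]
```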


Remove
In case you want to remove some of the circles, see how this works:
<html>
<head>
<script type="text/javascript" src="http://d3js.org/d3.v3.min.js"></script>
</head>
<body>
<script>

var cir = ["c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8", "c9", "c10"];
var svgHolder = d3.select("body") //select the body tag
                .append("svg") //add an svg tag to body
                    .attr("width", 700) //specify the svg tag properties
                    .attr("height", 500);

svgHolder.selectAll("circle").data(cir).enter()
         .append("circle") //add an svg circle
                .attr("id", function(d) {return d;})
                .attr("cx", function(d,i) {return (i+1)*50;})
                .attr("cy", 100)
                .attr("r", function(d,i) {return (i+1)*10;})
                .attr("stroke", "black")
                .attr("stroke-width", "3")
                .attr("fill", "red");

var removethese = ["c1", "c2", "c3", "c4", "c5", "c9", "c10"];
svgHolder.selectAll("circle").data(removethese).remove();

</script>
</body>
</html> 

 
Interesting, isn't it?
You supply the array that you want to remove using data(), and d3 removes them from the DOM. But there's something wrong with how they got removed. Read on to find out.
btw, remove() takes no arguments in d3; the selection itself determines what gets removed, so there's no need to pass "circle" to remove().


Exit
The opposite of enter() though, is exit(). You can use it like this:

<html>
<head>
<script type="text/javascript" src="http://d3js.org/d3.v3.min.js"></script>
</head>
<body>
<script>

var cir = ["c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8", "c9", "c10"];
var svgHolder = d3.select("body") //select the body tag
                .append("svg") //add an svg tag to body
                    .attr("width", 700) //specify the svg tag properties
                    .attr("height", 500);

svgHolder.selectAll("circle").data(cir).enter()
         .append("circle") //add an svg circle
                .attr("id", function(d) {return d;})
                .attr("cx", function(d,i) {return (i+1)*50;})
                .attr("cy", 100)
                .attr("r", function(d,i) {return (i+1)*10;})
                .attr("stroke", "black")
                .attr("stroke-width", "3")
                .attr("fill", "red");

var newcir = ["c1", "c2", "c3", "c4", "c5", "c9", "c10"];
svgHolder.selectAll("circle").data(newcir).exit().remove();

</script>
</body>
</html>


Odd, isn't it? All you did was add an exit() between data() and remove(), and the effect was exactly the opposite: instead of removing the elements you supplied to data(), the elements that were not supplied to data() got removed. d3 notices that the new array contains only 7 elements, retains the first 7 placeholders, and exit() hands the remaining 3 placeholders to remove().
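The arithmetic of the index-based rule can be sketched in plain JavaScript (just an illustration, not d3's internals):

```javascript
// With 10 existing circles and 7 new data values, the index-based join
// keeps the first 7 elements and puts the rest in the exit selection:
var oldCount = 10;
var newcir = ["c1", "c2", "c3", "c4", "c5", "c9", "c10"];
var exiting = [];
for (var i = newcir.length; i < oldCount; i++) {
  exiting.push("circle #" + (i + 1));
}
console.log(exiting); // ["circle #8", "circle #9", "circle #10"]
```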


A more accurate exit()
Noticed something else? When using just remove(), the first 7 circles got removed. When using it with exit(), the last 3 circles got removed. Neither call recognized that we had actually removed circles in-between, ie: circles "c6", "c7" and "c8".

To make d3 recognize which array values changed, you just have to supply a second parameter, a key function, to data().

<html>
<head>
<script type="text/javascript" src="http://d3js.org/d3.v3.min.js"></script>
</head>
<body>
<script>

var cir = ["c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8", "c9", "c10"];
var svgHolder = d3.select("body") //select the body tag
                .append("svg") //add an svg tag to body
                    .attr("width", 700) //specify the svg tag properties
                    .attr("height", 500);

svgHolder.selectAll("circle").data(cir, function(d) {return d;}).enter()
         .append("circle") //add an svg circle
                .attr("id", function(d) {return d;})
                .attr("cx", function(d,i) {return (i+1)*50;})
                .attr("cy", 100)
                .attr("r", function(d,i) {return (i+1)*10;})
                .attr("stroke", "black")
                .attr("stroke-width", "3")
                .attr("fill", "red");

var newcir = ["c1", "c2", "c3", "c4", "c5", "c9", "c10"];
svgHolder.selectAll("circle").data(newcir, function(d) {return d;}).exit().remove();

</script>
</body>
</html>


...and voilà! enter() and exit() now recognize the placeholders by the key function we supplied as the second parameter to data().
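Here's a conceptual sketch (plain JavaScript, not d3's internals) of what that key function changes: elements and data are now matched by the key's return value instead of by position, so an old element exits only if no new datum produces the same key.

```javascript
// Keyed join sketch: compute which old elements exit when data are
// matched by key rather than by index.
function keyedExit(oldKeys, newData, key) {
  var wanted = {};
  newData.forEach(function(d) { wanted[key(d)] = true; });
  return oldKeys.filter(function(k) { return !wanted[k]; });
}

var oldKeys = ["c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8", "c9", "c10"];
var newcir  = ["c1", "c2", "c3", "c4", "c5", "c9", "c10"];
console.log(keyedExit(oldKeys, newcir, function(d) { return d; }));
// ["c6", "c7", "c8"]: the circles that actually disappeared
```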

To conclude
D3.js is a splendid library which allows dynamic creation, removal and modification of DOM elements. The enter(), exit() and data() syntax is initially difficult to understand, which is why I thought I'd add my own tutorial to the internet to help newbies. I hope this was helpful. Do make plenty of use of anonymous functions and avoid for loops: data(), enter() and exit() offer far more powerful ways of dealing with data than any for loop can. You'll appreciate this only when you get into more complex ways of representing information on a page, so it's better to start correctly and use this syntax right from the beginning to make full use of the power of d3.

All the best!



Some people have emailed me asking how they could thank me for sharing this knowledge. The best way to thank me is by contributing to Open Source. If you'd like to give a more personal thank you, I don't really like the idea of monetary donations, but maybe a wishlist wouldn't be that bad.


LOL

Continued from the previous LOL page

Aliens and English




Corporate Hype




Continued in the next LOL page

Monday, December 1, 2014

What a Virtual SIM is

How would you like it if you had to use one computer for accessing your GMail account, another computer for accessing your Facebook account and yet another computer to do your internet banking? Sounds silly, doesn't it? But that's basically what you're doing with your phone.
To use a mobile number, you have a SIM card. To use another number, you have another SIM placed in a separate mobile phone. Dual SIM phones have eased that problem a bit, but what if somebody allowed you to pick up any mobile phone, log in to the phone with your mobile number, make a call (billed to your mobile number) and log out of the phone?

Sounds impossible? Futuristic? Well, the future is already here!
In 2008, some brilliant people at Comviva, an Indian company (now owned by Mahindra) and recently, another UK-based company called Movirtu (now acquired by BlackBerry) created the Virtual SIM technology.
What it is

Virtual SIM (I'll be calling it VSIM from now on) is just an imaginary SIM that your mobile phone operator (Airtel, Vodafone, Tata Docomo) will keep in one of their computer databases. Think of it like your Facebook account that's stored on Facebook's database or your GMail account stored in Google's database.
  1. To use a mobile number, you first purchase a Virtual SIM mobile number (which will be owned by an operator like Airtel etc. and you would probably also have the option of getting a real SIM card with it). 
  2. Then either simply borrow somebody's phone (you can use any ordinary phone for this) or use a phone at a phone-booth, enter your VSIM's USSD code (which looks like this: *101# or *139*1*1234567890#) which is how you 'log-in' to your Virtual mobile number. 
  3. The mobile network will identify that the phone is now using a different number, and make a note of it in its VLR (Visitor Location Register) database and it will allow you to make calls using your mobile number and will bill you accordingly. 
  4. Type the same USSD code again, and you'll be logged out.

So basically with VSIM, a mobile phone would have a real SIM in it, and an operator like Airtel would be able to allow even a Vodafone VSIM number to login to the Airtel real SIM, by dynamically assigning the Vodafone number to the real Airtel SIM. This technology even keeps track of your missed calls when you're logged out.
With smart-phones, it wouldn't be difficult to keep multiple virtual numbers logged in at the same time. The implementation is dependent on the phone manufacturers and operators.


Another way of looking at VSIM

Yet another company, Implementa provides Virtual SIM technology in a slightly different way:
Here, the real SIM cards are retained by the mobile phone operator at a SIM storage facility and a SIM emulation adapter is placed in the mobile phone instead of the real SIM card. Using the software interface of the mobile phone, you'll be able to assign one of the real SIM's to the SIM emulation adapter in the phone.


Benefits to mobile operators

Market penetration being crucial for companies, VSIMs have the potential to reach people who do not have the money to afford mobile phones. If one person among a community has a phone, the others can log in to it and use it. Their only expense will be their talk-time.
Another market segment is that of companies which allow BYOD and COPE policies. An employee can use their own number for personal calls and log back into the company number to make official calls. Hopefully this would improve the privacy aspect of smart-phone monitoring too, if the phone gets monitored only when it uses the company VSIM.


I love it when innovative ideas hit the market!

Sunday, November 30, 2014

Corporate smartphone monitoring

First, let me get you up-to-speed on some terminology:
  • Bring Your Own Device (BYOD) : It's a policy in a company or institution which allows people to bring personally owned mobile devices (phone/laptop/tablet) and to use it to access company/institute applications/information.
  • Corporate Owned Personally Enabled (COPE): It's a policy which companies use to give employees mobile devices (phone/laptop/tablet) and can monitor and control the employee's activity to a large extent. Employees are also allowed to use the device for their personal use.
  • SIM: It's basically just an IC embedded in a plastic card (which you call a SIM card) and it stores data used to identify and authenticate those who subscribe to mobile telephony (that's why it's called Subscriber Identity Module).
  • IT: The department of brilliant & hard-working people in companies/institutions, who setup and monitor the electronic and software infrastructure while at the same time, quickly identifying and fixing hardware and software problems that are reported.

Smartphone monitoring

Long back (and even today), even the garbage a person threw out of their house was examined by detectives who wanted to find out personal details of someone they were investigating (don't believe it? Have a look at what you throw into the garbage everyday).
Then came GMail and Facebook which have their algorithms for monitoring and analyzing your pictures and data and finding out intimate details about you.
But even these technologies are no match for what your smart-phone can reveal about you. Even Blackberry phone technologies have been cracked by the government.


BYOD

People are worried about BYOD because they think the company can monitor them. But an article on CIO says that not everything can be monitored. An employer will have much more important things to do (such as running their business), than monitoring personal information of their employees. It's like how Dan Brown's Digital Fortress mentioned that women needn't be worried that the NSA is going to spy on their emails and discover their secret recipe for preparing fruit jam.
However, certain surveys as mentioned in CIO, say that employees would be more comfortable if the employer clearly specified what they can and cannot monitor, and why they need to view that information. Some want it in writing, that their employer won't look at personal information.

BYOD is said to have resulted in data breaches in cases where an employee loses a phone and someone else accesses company information stored in the phone or if the employee leaves and company data is still in the phone.

Intellectual Property loss, litigations from employees about their private data and reimbursements are some of the hidden costs that an organization might incur because of a BYOD policy.

A survey also says that 44% of job seekers are more positive about an organization if it supports their mobile device.


COPE
It's a more secure and flexible alternative to BYOD: with employee-owned devices, it's a nightmare for a usually under-staffed IT department to monitor, keep track of and protect every phone, whereas company-owned devices make it easier to impose company-wide policies. COPE also allows IT to specify a limited set of permitted devices, which makes management easier.
The device being company-owned, the company can impose any restriction on which apps are allowed and can wipe data from a phone.

EMM solutions for COPE introduces the concept of "containerization" which enables organizations to create a separate partition which keeps corporate data isolated from personal data on mobile devices. This way, a data-wipe can be focussed on only erasing corporate information.

An organization should have a catalogue of allowed devices and apps, since allowing just about any device into the company network can be catastrophic.


Spy software

With software like MobileSpy (compatible with Android, iOS and Blackberry), on the other hand, a lot more can be monitored:
  • Screen: Viewing the actual phone's screen with a 90 second update rate.
  • Location: Locate the phone's position with GPS (the use of this feature takes up a good amount of battery, so it's almost never used, except for example, in cases where a company needs to monitor its truck-driver locations).
  • SIM info: Retrieve latest SIM information if the device is stolen or lost.
  • Wipe data & lock device: Can be done just by sending an SMS to the phone.
  • Text and messenger message logging: Every message is logged, even if deleted from the phone.
  • Social networking logs: Activity from Facebook and WhatsApp can be logged.
  • Youtube videos: Log which videos are watched.
  • Apps installed: Lets you see which apps are installed.
  • Web activity: All website URLs are logged.
  • App blocking: Access to certain apps can be blocked.
  • Photo log: All photos taken by the phone are logged and viewable.
  • Phone call info: Incoming and outgoing numbers are logged with duration and timestamp.
  • Email: All incoming and outgoing emails are saved.
  • Alerts: The person monitoring the device will be alerted when prohibited actions happen.
  • Contacts: Every existing and new contact is logged and saved.
  • Calendar events: Every event, date, time and location is saved.
  • Keylogger: MobileSpy is said to have it.
Another spy software, MobiStealth (for Android, iOS, Blackberry and Nokia), provides:
  • Location monitoring
  • Blackberry messenger log
  • Text message and email log
  • Contact details
  • Call details
The software is said to be completely undetectable, so children or employees cannot tamper with it.


Remotely controlling the phone

The person monitoring your phone won't be able to remotely activate the phone's microphone or camera. They won't be able to remotely switch-on your phone either, so nothing to be concerned about on this front. Your phone getting hacked or having a virus in it is a totally different situation though, where the hacker can control your phone. An employer obviously won't do such things, but an organization's IT department has to be on its toes when it comes to security updates and anti-viruses for the employee's phones.


Privacy

If you're concerned about privacy, the better and simpler option is to use the company smartphone only when you're working, and use a dual-SIM personal phone when you're not working. The dual-SIM feature will help you take work-related calls too, by transferring your company SIM to your personal phone temporarily.
Be aware though, that even though your company isn't able to track your personal information because you're not using the company phone, there are plenty of other companies that monitor your personal phone to collect details about you. If you don't want them to track you, turn off WiFi on your phone.

Sunday, November 23, 2014

Ophthalmology has a long way to go

What's most important for a salesperson? Making a sale.
For a business person? Profit.
An ophthalmologist? Salary or the patient's vision or both.
A patient? To see clearly.

I've worn spectacles for more than a decade, and never had any problems other than the usual increase in eye power which necessitated a change in spectacles every two or three years. During the past three years however, I've suffered and almost recovered from chronic eye strain and during this time, been to more than twelve ophthalmologists and got a much closer look at the business of corrective vision.

Disclaimer: I'm not a doctor, and these are my own opinions. Don't consider this as medical advice.

Some insights

Random advice: People will always have their recommendation of which hospital and ophthalmologist is best. Remember that most people just repeat what they've heard from others. Ask them specifically what is it about the ophthalmologist or hospital that is so good that they recommend it. Consider it only if you get good reasons.

The specialist myth: A doctor once told me that 95% of ailments can be cured at a general hospital; you don't need to see a specialist. A dermatology specialist once charged me Rs.400 for a consultation and prescribed Rs.300 worth of medicines for a tiny infection. When it happened the next time, a general physician at another hospital charged Rs.25 as his fee and prescribed a Rs.75 medicine which cured the problem. Same with the eyes. Remember that an eye speciality hospital has no other way of making money than by treating eye problems.

When I went to one, I was charged Rs.300 for registration. The ophthalmologist barely even checked my eye, prescribed an incorrect lens power and, his speciality being dry eyes, asked me to go to a nearby diagnostic centre (which I'm quite sure they have a tie-up with) to get my vitamin B12 levels checked, and to come back for another assessment within a week. When I went to a general hospital instead, I was charged Rs.25 for registration, the ophthalmologist examined my eyes thoroughly and prescribed an eye gel that gave me instant relief (as opposed to prescribing a new lens). I like it when I meet a doctor who really cares about the patient rather than about the money.

People speak of how eye speciality hospitals offer a free check-up the next time you visit, but think of this: at a general hospital, two visits cost Rs.25 + 25 = Rs.50. That's already six times cheaper than what you spend at the eye speciality hospital, so what's the point of the free visit?

Rest is necessary: Make sure your eyes feel rested and comfortable before you get your eye power checked. The best way to ensure this is to get an eight hour uninterrupted sleep at night and to get your eyes checked in the morning. Strain can make a difference of 0.25 to 1 dioptre. I've known ophthalmologists who recommend glasses for normal sighted people for whom the computerized eye test detects a -0.25 dioptre power. The power would just be because the person didn't get a full night's sleep, or because of using a computer all day. They don't really need spectacles.

Careful of cliches: The textbooks and the internet will tell you that one of the reasons for eye strain is astigmatism. Every ophthalmologist I went to, prescribed lenses for astigmatism. Even the computerized eye test showed astigmatism, although I couldn't tolerate lenses with astigmatism correction for more than 10 minutes. Only one experienced ophthalmologist listened to the symptoms I narrated and prescribed an eye gel which reduced the strain and recommended proper sleep (which reduced the strain over a course of many months). If you're experiencing dry eyes, it's not only sleep you need, but also at least 5 minutes of rest at least every 45 minutes. If you still feel strain, get more rest. Keeping your eyes closed is best while resting. Although eye exercises are good, I haven't found them to help in strain-related cases. Only rest helps.

Computerized eye tests are inaccurate: Each and every one of the computerized eye tests I've undergone at five hospitals has given an incorrect reading (even at times I didn't suffer from strain). One hospital made the computerized eye test mandatory because apparently a patient sued them over an eye prescription which turned out to be different at another hospital. So the computerized eye test was their way of saying "look, the machine detected your eye power, and it can't be wrong. Now you can't sue us". If only the patient had known the relation between strain and eye power, they wouldn't have sued the hospital. I still don't understand why people depend on those machines. By the way, even the lenses that an ophthalmologist or optician uses to assess your eye power can be inaccurate. It's best to get your eyes checked at at least three different places before deciding what your eye power is and purchasing spectacles. Do it during a single morning and see the difference in prescriptions you get from various hospitals. If you don't want to spend too much money, go to a medical college which does these checkups for free.

If uncomfortable, don't wear the spectacles: Take this very very seriously. If wearing spectacles gives you a headache, strain or burning in the eye, remove it immediately and get a new one. Don't ask the optician/ophthalmologist for an opinion. They'll try their best to hide the fact that they made a mistake or that they prescribed the wrong lens/power for you. Once when one lens of my spectacles got twisted vertically (like in a pantoscopic tilt), I told the optician to fix it and they couldn't do it completely, as it could break the glass. He said "Try it for a week sir and see". It only got worse from then on. Lesson: don't wear spectacles that make you uncomfortable. Get a new pair. There are medical colleges where you can get spectacles made for less than one third of what will be charged at a store.

Glass lenses are definitely different from plastic lenses: Everyone will tell you that vision with a glass lens and plastic lens is the same. It isn't. A glass lens has much better visual acuity and will give your eyes lesser or no strain as compared to a plastic lens. I'm unsure why this happens, but have personally experienced it.

You don't have a pore in your retina: A new trend emerging in hospitals is to frighten a patient by saying that they might have pores in their retina if they use the computer too much, when the patient might have simply come for a simple eye checkup. This is almost always utter nonsense. Instead of getting it checked at that hospital, go to another hospital and get it checked if you really feel you might have symptoms of pores in your retina.

Proper sleep is necessary
Modern life has strangely encouraged children to wake up very early during exam days to study, and working professionals to get stressed out and subject themselves to a lack of exercise and sleep deprivation. You'll see many of these people with red eyes, feeling sleepy many times during the day and suffering a general lack of concentration. Many of them could have unknowingly entered a cycle of polyphasic sleep, where even if they try, they won't get an eight hour sleep at night. Some get just four hours of sleep and wake up feeling refreshed, then feel sleepy again multiple times during the day. This is a dangerous situation for the eyes. It builds up strain slowly, and the strain can become chronic.

Polyphasic sleep. Images from Wikipedia
Apart from the usual advice on getting proper sleep that you'll find on the internet (which didn't help me at all), there's this one I offer (which might help only some of you): Cold weather can be a problem. Keep yourself warm enough to get a full night's sleep.

The field of ophthalmology has a very long way to go before being able to cure people of vision-related problems. Right now the best it can do is offer you a pair of crutches for your eyes (spectacles or contact lenses).


Heard the truth about Lasik?
The person who approved Lasik surgery in the U.S.A. is now campaigning against it. He asks: "How many eyes are you willing to ruin to make a living?".
When it was approved, a 1% problem rate in Lasik surgeries was agreed upon; the fact that in reality it was 20% was hidden.



Lasik, contact lenses or just spectacles; it's like any other industry that survives on money. Use your discretion when you receive recommendations from opticians and ophthalmologists. Search for doctors who really care about the patient. Your eyes are important.

Saturday, November 15, 2014

Posting HTML, CSS and Javascript code in Blogger or WordPress posts

Something that surprises me is that although Blogger.com has extremely good tools to create blog posts, they still haven't catered to the needs of web developers who want to post code on their blogs. A simple code tag, similar to the blockquote tag, would have helped.

On pasting code about d3.js, and publishing the post, the code disappeared. After a lot of searching I found SimpleCode, created by Dan Cederholm. This nifty form allows you to paste your HTML code into it and it converts parts of the HTML syntax into their corresponding character codes which Blogger's parsers recognize. Try it out!




There are more such tools available.


Getting your head around d3.js [part1]

Most people don't know that D3 stands for the three D's: Data Driven Documents (same way that C4ISR means Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance in our defence architecture).

Even to an experienced programmer, d3.js syntax can be very complex. But this complexity can also simplify a lot for us. We've been too accustomed to using for and while loops in programming. d3 offers a more elegant solution, in the same way that MATLAB offered matrix operations with a very clean and intuitive syntax, free of loops.

I had a hard time with d3, and decided to create a d3 tutorial for newbies.


The usual HTML, JavaScript and SVG
First, you should know that it's possible to draw on a web page, using SVG (Internet Explorer 8 and below don't support svg. Use Firefox or Chrome instead).

This code will draw a circle:

<html>
<body>
<svg height="100" width="100">
<circle cx="50" cy="50" fill="red" r="40" stroke-width="3" stroke="black">
</circle>

</svg>
</body>
</html>




Notice that some properties have been given to the circle. There's cx, cy, r, stroke, stroke-width and fill. The svg tag itself has been given a width and height, and the svg tag is within the body tag of html.

You should also know that JavaScript allows you to select elements in your DOM and apply properties to them. You can add "some text" to a tag dynamically, with this code:

<html>
<body>
<p id="someID"></p>
<script>
var b = document.getElementById("someID");
b.innerHTML = "some text";
</script>
</body>
</html>



Now have a look at some d3 code which does pretty much the same thing:

<html>
<head>
<script type="text/javascript" src="http://d3js.org/d3.v3.min.js"></script>
</head>
<body>
<p id="someID"></p>
<script>
var abc = d3.select("body") //select the body tag
                .append("svg") //add an svg tag to body
                    .attr("width", 100) //specify the svg tag properties
                    .attr("height", 100)
                .append("circle") //add an svg circle
                    .attr("cx", 50) //specify properties of the circle
                    .attr("cy", 50)
                    .attr("r", 40)
                    .attr("stroke", "black")
                    .attr("stroke-width", "3")
                    .attr("fill", "red");

//If there's a tag with an id (see the top of this code), you can select it like this:          
var xyz = d3.select("#someID").text("some text");
             
</script>
</body>
</html>




If someID was specified as <p class="someID"></p>, you'd have to select it using d3.select(".someID");

Instead of statically creating an html tag for <p>, you can also dynamically create it like this:
var efg = d3.select("body")
                .append("p")
                    .text("some more text");


Simple enough?
All that "attr" syntax may look unnecessary at first, but I'll soon show you why it's helpful to have it that way, and I'll explain the concepts of enter(), data() and exit(), which bring in the real power of the d3 syntax.

 Continued in part2.

 

Saturday, November 8, 2014

NRecursions stats

Crossed ten thousand page views already!


The major traffic appears to be from people who need some help with technology and find the answers provided here. Bot visits also get counted, apparently. I considered going for Google Analytics, but the terms and conditions mention:

"...the Service is provided without charge to You for up to 10 million Hits per month per account. Google may change its fees and payment policies for the Service from time to time...Any outstanding balance becomes immediately due and payable upon termination of this Agreement"

Although ten million hits seems far-fetched, the policy changes are scary.

I had installed ClustrMaps in May 2010, and it seems to provide a more accurate statistic.


Two thousand four hundred plus visitors. Other counters will even take into account the 'hits' to a page via RSS feeds etc. But ClustrMaps counts a visit only when a person actually visits NRecursions and the ClustrMaps icon gets rendered. So bots aren't counted. Only real visitors including myself.
If someone visiting from a specific IP address reloads the NRecursions page 20 times in one day, they will be counted as just 1 visit. This is more representative of the overall flow of visitors to the site. That's why I named it "Recursors". Real people recursively visit this blog :-)

Monday, November 3, 2014

LOL

Continued from previous LOL page.

Footwear and furniture
Share with this link


Continued in the next LOL page

Monday, October 20, 2014

Conference: Computer Society of India's National Conference on Formal Methods in Software Engineering


Turns out I'm just as comfortable with geeks as I am talking to entrepreneurs :) The 15th and 17th of this month witnessed a conference on formal methods in various sectors of software programming. This was the first workshop on formal methods that the Computer Society of India conducted in India, at IISc Bangalore (some more info about the institute here). 

The stately IISc main building
The lush green campus at the IISc

The Computer Society of India was established in 1965. The Bangalore chapter is the 7th in India. It has 2700 members, 72 chapters and 600+ student chapters.

After the inauguration ceremony...




...we were introduced to the problems researchers face when they try to introduce formal methods to the industry. The only things the industry is interested in are profits and product performance. So if you try to introduce formal methods anywhere, you'll face resistance because of their complexity and time-consuming nature. How do you feed them to the industry then? You sugar-coat them.


Topics


General info on Formal Methods
By Dr.Yogananda Jeppu (Moog India)



Dijkstra said "Program testing can be used to show the presence of bugs, but never to show their absence!". Being 99% safe is not sufficient for a safety-critical system. Ideally, programmers shouldn't waste their time debugging; they shouldn't introduce a bug in the first place.
In the very first build that Dr.Yogananda's team did on the LCA's code, they found a divide-by-zero error. He says it happens often in flight-critical code. It has even caused a ship to get stranded at sea: the whole control system failed and they had to tow the ship with another ship. Imagine what would have happened if it was war-time.

So why do bugs exist and what can be done about it?
It happens because of the language we write requirements in - "English". English sentences can be interpreted in many ways. People misunderstand requirements simply by assuming things. This is where formal methods come in.  

We require something more than English to define our system requirements: defining a complex system as a mathematical model, broken down into little requirements where every requirement has a mathematical equation. Once you define that, it becomes easy to prove (using provers) that the system behaviour is right.

Formal methods aren't new. There were papers written on them in 1969 and 1975. There is a set of formal specifications where you write the requirements in a modelling language (perhaps Z). The engineer works with a set of theorems. Eg: one theorem could be that you turn on a switch, electricity flows and a bulb should glow. But when you do the theorem proving, you find out that the bulb is fused and the light won't turn on. That's when you realize it's something you missed mentioning in your theorem's conditions. Such gaps surface when you try to prove conditions in a formal notation.

So we write an equation, run it and re-define the system behaviour. That's how formal methods are used. After that it's just a question of writing this mathematical equation in C code, compiling and running it.
  • Specify your system
  • Verify your system
  • Generate your code
  • And you're ready to fly! (it's not so easy though)
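The specify-then-verify loop above can be sketched in miniature. The snippet below writes a requirement as a mathematical predicate and checks an implementation against it exhaustively over a small domain — only a toy illustration of the idea, since real formal tools use theorem provers or SMT solvers rather than brute-force enumeration (all names here are made up):

```python
# Toy "specify, then verify" loop: the requirement is a mathematical
# predicate, checked exhaustively over a small domain. Real formal
# tools prove this for ALL inputs instead of enumerating them.

def spec(x, result):
    """Requirement: result is the absolute value of x."""
    return result >= 0 and (result == x or result == -x)

def implementation(x):
    return x if x >= 0 else -x

def verify(domain):
    """Check the implementation against the spec over the whole domain."""
    return all(spec(x, implementation(x)) for x in domain)

print(verify(range(-1000, 1001)))  # True: every input satisfies the spec
```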


Non-acceptance of formal methods and semi-formalizing it

When testing is too difficult and if formal methods are not accepted in the industry due to complexity or a time-crunch, a lighter version of formal methods can be used - maybe just assertions, just a mathematical equation or for some part of the system you could use formal methods. For the Saras aircraft, they did mode transitions in a semi-formal way. Airbus is using it for low-level testing to check for the highest value of the integer or float or a worst case timing analysis. You can qualify the tool for checking such parameters.

Given that the DO-178C certification is necessary for avionics, there's a formal methods component released in Dec 2011 which can be used to comply with the certification requirements. Expressing requirements in the form of mathematics forces the person to ask questions and clarify the requirements.




Deductive verification of C programs
By Dr.Kalyan Krishnamani (NVIDIA)



Wikipedia defines deductive verification as "...generating from the system and its specifications (and possibly other annotations) a collection of mathematical proof obligations, the truth of which imply conformance of the system to its specification, and discharging these obligations using either interactive theorem provers (such as HOL, ACL2, Isabelle, or Coq), automatic theorem provers, or satisfiability modulo theories (SMT) solvers".

Deductive verification can be performed using Frama-C. It has a bunch of plugins for value analysis (what values the variables of a program can take to compute), coverage and other method plugins. You can write your own plugins too (like one to detect divide by zero). Plugins are written in the OCaml language (it's easy to learn, but writing optimal code in it is an art).

Why verification?

In 1996, the Ariane 5 rocket was destroyed because it reused software from the previous-generation rocket (which wasn't re-verified), and while converting a 64-bit floating-point value into a 16-bit integer, the software shut down due to an overflow error. The rocket had a backup computer in case the primary shut down on an error, but it ran the same software, so it shut down too and the rocket disintegrated (that's what rockets are programmed to do if something goes wrong).
That was long ago, but even today, verification methods are needed. More so, because software complexity has increased. Airbus and Dassault aviation use Frama-C (Framework for Modular Analysis of C programs) very often.

What it does: Frama-C gathers several static analysis techniques in a single collaborative framework. The collaborative approach of Frama-C allows static analyzers to build upon the results already computed by other analyzers in the framework. 

You can verify functions standalone. You don't need a main() function. It uses CIL (C Intermediate Language). It also uses ACSL (ANSI C Specification Language) as the front-end.

This is how verification would be done with ACSL, for a function that returns the maximum value of two values (the code in comments is the verification code. Note how it checks to ensure that the return value should be either x or y and no other number. Would you have checked for this condition if you wrote a test case?):

/*@ ensures \result >= x && \result >= y;
    ensures \result == x || \result == y;
*/
int max (int x, int y) {
    return (x > y) ? x : y;
}
Once a programmer defines such conditions, you don't need separate documentation. The ACSL notation itself becomes the document/requirement specification for the next person who comes across the code. There's no miscommunication either, since it's mathematically expressed.
Through ACSL, you can use annotations, named behaviours, allocations, freeable, ghost variables, invariants, variant and inductive predicates, lemmas, axiomatic definitions, higher-order logic constructions, lambdas and more.
Frama-C can also be used for regular testing, using plugins.
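To see what the two ensures clauses are saying, here they are mimicked as runtime assertions in ordinary code — a Python sketch of the contract, not ACSL itself (Frama-C proves these statically for all inputs; assertions only check the inputs you happen to run):

```python
def max2(x, y):
    result = x if x > y else y
    # ensures \result >= x && \result >= y;
    assert result >= x and result >= y
    # ensures \result == x || \result == y;  (no other number allowed)
    assert result == x or result == y
    return result

print(max2(3, 7))  # 7
```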


The architecture

As shown in the image below, the tool Krakatoa (which processes Java programs through the Why 2.30 tool) is the Java-side counterpart of Frama-C; KML (Krakatoa Modelling Language) is the language used for writing the code properties. These properties are fed into the Why tool, which converts them into an intermediate language, Jessie (although nowadays a tool called WP is used instead of Jessie). Why then works on what Jessie generates and produces Verification Conditions (VCs): a bunch of assertions, like mathematical formulae, to be handed to the various prover modules. The encoding step (rather, call it a 'translator') takes the verification conditions and translates them into the input languages of the different provers. So if the verification conditions are satisfied, the properties (specification) you wrote are correct.
Architecture of certification chains: Frama-C, Why, Why3 and back-end provers

  • For Frama-C, the input is a C program. 
  • Deductive verification (the technique which, through a bunch of rules generates the VC's) generates the verification condition (so you don't even need to know deductive verification to start using Frama-C. You'll need to know a bit of Jessie though, when you need to debug). 
  • Provers like Simplify are then used to verify them.

For arrays, structures, unions etc., Frama-C already provides the commonly used memory models. It also lets you specify your own memory model, or download any other model and integrate it with your system. The downloadable-model feature was needed because the next version of Frama-C might support C++. 


Hoare's triple

Deductive verification research is based on Hoare triples. You have a set of rules that dictate how to handle a variable assignment, how to handle a loop and so on. 

This is the Hoare triple: {P} s {Q}

where s can be any program fragment (an if statement, etc.), P is the pre-condition and Q is the post-condition (P and Q are the conditions the user writes; remember ACSL above?). If P holds before s executes, Q will hold after.
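One way to build intuition for the triple: for every state where P holds, run the fragment s and check that Q holds afterwards. The sketch below does this over sampled states (a deductive tool proves it for ALL states; the triple and fragment here are made up for illustration):

```python
# Checking a Hoare triple {P} s {Q} by testing.
# Triple: {x >= 0}  y = x + 1  {y > 0}

def P(x):
    return x >= 0            # pre-condition

def s(x):
    return x + 1             # the program fragment: y = x + 1

def Q(x, y):
    return y > 0             # post-condition

# For every sampled state satisfying P, run s and check Q.
holds = all(Q(x, s(x)) for x in range(-50, 51) if P(x))
print(holds)  # True
```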


Other tools

While Why is an excellent tool for a modelling or verification environment, other tools are also helpful: B-method (used in the automotive industry), KeY (used for Java deductive verification on the Paris automated metro trains; it generates loop invariants automatically), Boogie, Dafny, Chalice (for deductive reasoning) and CompCert (a certified C compiler that can actually replace GCC; it caters to the strict coding guidelines necessary for avionics software, e.g. variables can't be used without first being initialized). And Frama-C is not just a formal tool: there are also coverage tools (for unit testing), runtime error checking, data-flow analysis, plugins for concurrent programs and more.

To know which tool is best suited to what application/field, ask the researchers who work in that field.




Refinement based verification and proofs of functional correctness
By Prof. Deepak D'Souza (IISc)


Prof. Deepak introduced us to the question of "How do you prove functional correctness of software?"
If you were to verify the implementation of a queue algorithm, one way to prove it is to use pre and post conditions. The other way is to prove that the implementation refines (provides the same functionality as) a simple abstract specification.

Hoare-logic-based verification can prove complex properties of programs, like functional correctness, and thus gives us the highest level of assurance about correctness (unlike testing techniques, which do not assure us of an absence of bugs). The graph below shows the level of correctness required and the amount of correctness provided by the four techniques.

In the picture above, Frama-C is what uses an abstraction refinement technique, whereas bug detection is the normal testing done with ordinary testing tools.

Hoare logic was checked using the VCC tool. For a max() function similar to the one proved using Frama-C, the logic is written in VCC with this kind of syntax:

int max(int x, int y, int z)
_(requires \true)
_(ensures ((\result>=x) && (\result>=y) && (\result>=z)))

Again, there's another condition to be added here, which has to ensure that the result returned should be either x, y or z, and not any other number.

The condition is then sent to an SMT (Satisfiability Modulo Theories) solver, which says whether the condition holds or not.
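Why that extra "result is one of the inputs" clause matters can be shown with a deliberately buggy implementation that satisfies the weak spec but not the strengthened one — a Python sketch (VCC would prove or refute this symbolically; the function names are made up):

```python
def weak_spec(x, y, z, r):
    # only requires r to be at least as large as every input
    return r >= x and r >= y and r >= z

def strong_spec(x, y, z, r):
    # additionally requires r to actually BE one of the inputs
    return weak_spec(x, y, z, r) and r in (x, y, z)

def buggy_max3(x, y, z):
    return x + y + z   # wrong, yet >= each input when all are positive

print(weak_spec(1, 2, 3, buggy_max3(1, 2, 3)))    # True  -- bug slips through
print(strong_spec(1, 2, 3, buggy_max3(1, 2, 3)))  # False -- bug caught
```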


Overall strategy for refinement
To verify a program, you have to use both techniques: first Hoare's technique, where you use a refinement language to generate annotations, and then use those annotations in tools like Frama-C.




An industry perspective on certifying safety critical control systems
By Dr.Yogananda Jeppu (Moog India)


This talk was about mistakes we make in safety critical applications.

What is a Safety Critical Application? When human safety depends upon the correct working of an application, it's a safety critical application. Examples: Railway signalling, nuclear controllers, fly-by-wire, automotive domain (cars have more lines of code than fighter jets), rockets, etc.

Every industry has its own standard for safety (avionics, railways etc.). But accidents still happen, and safety problems can cause products to be recalled (like Toyota, which had a software problem that prevented the driver from applying the brakes).

Dr.Yogananda proceeded to show us many examples of past incidents: aircraft that crashed because of software errors that reported the plane's elevation wrongly, people exposed to too much radiation in medical systems because of software errors (in 2001 and again in 2008, which shows that programmers make the same mistakes repeatedly), and trains crashing into each other due to sensors erroneously reporting that the trains were not in each other's path.

In hindsight, all these errors could have been caught by the one test case that was not written. Fault densities of 0.1 per 1000 lines of code are seen in space shuttle code. In the nuclear industry, the probability of an error is 1.14x10^-6 per hour of operation (it actually should be 10^-9 or lower, but the industry is nowhere near that).

Dormant errors

Once, an error involving a tiny number, 10^-37, was found in an aircraft's software after lying dormant for twelve years. This was due to denormal numbers: floating-point numbers can be represented down to about 10^-45 on PCs, but some compilers treat numbers smaller than 10^-37 as zero. So in one of the aircraft's channels the number was 10^-37 and in the other it was 10^-38; one channel didn't work properly, and the issue came to the forefront when an actuator had issues in a certain unique situation. Note that this happened after all the flight testing was done. Errors in filters and integrator systems are common in missile systems and avionics.

So code coverage tests are very important. Also check variable bounds by running many random cases, monitoring the internal variables and collecting max and min values; formal method tools exist to do this. Many times, designers don't take verifiers' observations into account: they say it's not necessary and talk about deadlines. Never generate test cases just to achieve code and block coverage. Coverage only shows which parts of the code weren't tested; it doesn't show deficiencies.

Nothing is insignificant in safety critical applications.


Standards

You have to know what kinds of problems can happen in a system before writing code, and standards help with this. The aerospace industry has standards like DO-178B. Coding takes only about a quarter of the time that verification takes, but verification is very necessary. Certification is a legal artefact that has to be retained by a company for 30+ years.


Model based testing and logical blocks

You can even use Microsoft Excel based equations as a model for testing. If model and code outputs match, we infer the code is as per requirements. Instrumented and non-instrumented code output should match very well with model output. You cannot rely on a tool that generates test cases.
For safety critical application, all logical blocks have to be tested to ensure modified condition / decision coverage (MC/DC).


Orthogonal arrays

Orthogonal arrays can be used in any field to reduce the number of tests. A free tool for this is called "All Pairs", and MATLAB has orthogonal array generation routines. The ACTS tool from the National Institute of Standards and Technology can also help, and SCADE can help with certification.

Any safety critical tool has to be qualified. It's not an easy task and you shouldn't use a software without it being qualified for use. Hire someone to qualify it and test the limitations of the tool under load.


Testing tantra
  • Automate from day one. 
  • Analyze every test case in the first build. 
  • Analyze failed cases. Do a debug to some level. 
  • Eyeball the requirements. 
  • Errors are like cockroaches. You'll always find them in the same place (cockroaches are always found behind the sink). Eg: you'll always find problems with variable initialization. 
  • It's useful to have a systems guy close-by. 
  • Have tap out points in model and code.



Race detection in Android applications
By Prof. Aditya Kanade (IISc)

This talk was about formalizing concurrency semantics of Android applications.
Algorithms to detect and categorize data races and Environment Modeling to reduce false positives.

Droid Racer is a dynamic tool developed by Prof. Aditya's team to detect data races. It runs on Android apps and monitors execution traces to find races in those executions. They have run it on applications like Facebook, Flipkart and Twitter, and found data races. It performs systematic testing and identifies potential races in popular apps.


The Android concurrency model

Dynamic thread creation (forking), locking, unlocking, join, notify-wait.
Android adds a concept called asynchronous constructs: task queues, which hold asynchronous tasks. Threads are associated with task queues; this is also called event-driven programming. It's a FIFO model, where any thread can post an asynchronous task to a task queue.
Eg: While you're playing a game on your phone, an SMS might arrive, or your battery may go low and the phone would give you a warning message. 
Asynchronous tasks execute atomically: there's no pre-emption. Once a thread is assigned a task, the task runs to completion.

A typical Android thread creation and destruction sequence
In the above diagram, note how the system process and application processes are separate. Each process can have its own number of threads within it. Events are enqueued and threads like "t" are created which run the tasks in the task queue.

Android can also have single threaded race conditions.
Sources of non-determinism can be:
  • Thread interleaving
  • Re-ordering of asynchronous tasks

When checking for races you have to check for:
  • Forks happening before initialization of newly forked thread
  • Thread exits before join
  • Unlock happening before a subsequent lock and more

Droid Racer algorithm

It uses a graph of vertices and edges which creates what the professor refers to as a "happens-before ordering". To check for problems, they do a depth-first-search to pick two memory locations and check if they share an edge in the graph. If there's no edge, you've found a data race.
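That check can be sketched as plain graph reachability: if neither of two conflicting accesses can reach the other along happens-before edges, they're unordered and hence a potential race. This is only an illustration of the idea, not Droid Racer's actual code; the event names are hypothetical.

```python
# Operations are vertices; edges record known happens-before ordering.
# Two accesses to the same location race if neither reaches the other.

def reaches(graph, src, dst):
    """Depth-first search: is dst ordered after src?"""
    stack, seen = [src], set()
    while stack:
        v = stack.pop()
        if v == dst:
            return True
        if v not in seen:
            seen.add(v)
            stack.extend(graph.get(v, ()))
    return False

def races(graph, a, b):
    return not reaches(graph, a, b) and not reaches(graph, b, a)

# fork -> w1 (thread 1 writes x), fork -> w2 (thread 2 writes x)
hb = {"fork": ["w1", "w2"], "w1": [], "w2": []}
print(races(hb, "w1", "w2"))    # True: the two writes are unordered
print(races(hb, "fork", "w1"))  # False: ordered by the fork edge
```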


Data races are classified into...

...multi-threaded (non-determinism caused by thread interleaving)
and
Single-threaded data races. They can be...
  • co-enabled (where high level events causing conflicting operations are un-ordered)
  • cross-posted (non-deterministic interleaving of two threads posting tasks to the same target thread)
  • delayed (at least one of the conflicting operations is due to a task with a timeout. This breaks FIFO)

These were the data races they found in some popular applications

Droid Racer is available for download from the IISc Droid Racer website.




Probabilistic programming
By Dr.Sriram Rajamani (Microsoft)
 

Probabilistic programs are nothing but C, C#, Java, lisp (or any other language) programs with...

1. The ability to sample from a distribution
2. The ability to condition values of variables through observations

The goal of a probabilistic program is to succinctly specify a probability distribution. For example, how are chess players' skills modelled in a program?

The program looks like this:

float skillA, skillB, skillC;
float perfA1, perfB1, perfB2, perfC2, perfA3, perfC3;

skillA = Gaussian(100, 10);
skillB = Gaussian(100, 10);
skillC = Gaussian(100, 10);

// first game: A vs B, A won
perfA1 = Gaussian(skillA, 15);
perfB1 = Gaussian(skillB, 15);
observe(perfA1 > perfB1);

// second game: B vs C, B won
perfB2 = Gaussian(skillB, 15);
perfC2 = Gaussian(skillC, 15);
observe(perfB2 > perfC2);

// third game: A vs C, A won
perfA3 = Gaussian(skillA, 15);
perfC3 = Gaussian(skillC, 15);
observe(perfA3 > perfC3);

return (skillA, skillB, skillC);

Skills are modelled as Gaussians. If player A beats players B and C, A is assigned a high skill estimate. But if player A had beaten player B and B then beat A in the next game, there's a stochastic relation between the observations and A's skill: the program stays unsure and won't assign too confident an estimate of whether A's skill is really good (pretty much like how our brain thinks). Now if A beats C, A gets a better rating, but there's still a lot of uncertainty about whether A is really better than B or not.
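One common way to give observe() a meaning is rejection sampling: draw all the variables, throw the run away if an observation fails, and keep only the runs consistent with what was observed. Here's a minimal sketch of that semantics for a single "A beat B" game (the numbers match the program above, but this is an illustration, not the inference engine Dr.Sriram described):

```python
import random

random.seed(0)

def sample_skills():
    """Rejection sampling: keep only runs where observe(...) holds."""
    while True:
        skillA = random.gauss(100, 10)
        skillB = random.gauss(100, 10)
        # one game: A vs B, A won
        perfA = random.gauss(skillA, 15)
        perfB = random.gauss(skillB, 15)
        if perfA > perfB:          # observe(perfA > perfB)
            return skillA, skillB

runs = [sample_skills() for _ in range(5000)]
meanA = sum(a for a, _ in runs) / len(runs)
meanB = sum(b for _, b in runs) / len(runs)
print(meanA > meanB)  # True: conditioning on the win shifts A's skill up
```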

Ever wondered how scientists make predictions about the population of animals in a forest, or calculate the probability that you'll be affected by a certain disease? The Lotka-Volterra population model is used for the former; it can take multiple variables into account for calculation and prediction. Well-known models include the population of goats and tigers in a forest, the probability of kidney disease based on age, gender etc., hidden Markov models for speech recognition, Kalman filters for computer vision, Markov random fields for image processing, carbon modelling, evolutionary genetics, quantitative information flow and inference attacks (security).

Inference

Inference is program analysis. It's a bit too complex to cover in this blog post, so I'm covering it only briefly.

The rest of the talk was about formal semantics for probabilistic programs and inferring the distribution specified by a probabilistic program: generating samples to test a machine learning algorithm, calculating the expected value of a function with respect to the distribution specified by the program, and calculating the mode of the distribution specified by the program.

Inference is done using concepts like program slicing, using a pre-transformer to avoid rejections during sampling and data flow analysis.




Statistical model checking
By Prof. Madhavan Mukund (CMI)

Model checking is an approach to guarantee the correctness of a system.

The professor spoke about stochastic systems, where the next state depends probabilistically on the current state and past history - the Markov property and probabilistic transition functions.

Techniques for modelling errors
  • Randomization: Breaking symmetry in distribution algorithms
  • Uncertainty: Environmental interferences, imprecise sensors
  • Quantitative properties: Performance, quality of service
Two major domains of application are cyber-physical systems (auto-pilots, anti-lock braking) and biological systems such as signalling pathways.


A simple communication protocol analysed as a Discrete Time Markov Chain

Consider a finite state model where you're trying to send a message across a channel and it either fails or succeeds.
The matrix P shows the probability of going from state to state in a finite state model; every row adds up to 1. States are labelled with atomic propositions such as {succ} (meaning 'success').

We're interested in:
  • Path based properties: What's the probability of requiring more than 10 retries?
  • Transient properties: What's the probability of being in state S, after 16 steps?
  • Expectation: What's the average number of retries required?
This talk focussed on path-based properties.
Each set of paths can be measured: the probability of a finite path is obtained by multiplying the transition probabilities along it.
A single infinite path has a probability of zero.

This way you can model and measure paths that fail immediately, paths with at least one failure, paths with no waiting and at most two failures etc.
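The "multiply the probabilities along the path" rule can be sketched directly. The transition matrix below is a made-up version of the send/retry protocol (the actual matrix from the talk isn't reproduced here):

```python
# One-step transition probabilities of a small made-up DTMC:
# start -> try; a try succeeds with 0.9 or fails with 0.1; a fail retries.
P = {
    ("start", "try"): 1.0,
    ("try", "succ"): 0.9,
    ("try", "fail"): 0.1,
    ("fail", "try"): 1.0,
}

def path_prob(path):
    """Probability of a finite path = product of one-step probabilities."""
    prob = 1.0
    for a, b in zip(path, path[1:]):
        prob *= P.get((a, b), 0.0)   # missing transition => probability 0
    return prob

print(path_prob(["start", "try", "succ"]))                 # 0.9
print(path_prob(["start", "try", "fail", "try", "succ"]))  # one retry, about 0.09
```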

Disadvantage of the stochastic method
  • Probabilities have to be explicitly computed to examine all states
  • You have to manipulate the entire reachable state space
  • It's not feasible for practical applications
The alternative - Statistical Approach

This approach simulates the system repeatedly using the underlying probabilities. For each run, we check if it satisfies the property. If c of N runs are successful, we estimate the probability of the property as c/N. As N tends to infinity, c/N converges to the true value.

The statistical approach helps because simulation is easier than exhaustively exploring the state space. It's easy to parallelize too: you can run independent simulations. The downsides are that properties must be bounded (every simulation should terminate within a finite amount of time), there can be many simulations, and the guarantees are not exact.


Monte Carlo model checking

This is performed using a Bounded Linear Time Logic as input and assuming a probabilistic system D. A bound t is computed, the system is simulated N times, each simulation being bounded by t steps and c/N is reported, where c is the number of good runs.
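The simulate-N-times, report-c/N recipe looks like this in miniature. The chain and its numbers are made up for illustration (a send attempt that succeeds with probability 0.5, property "success within 3 steps"); this is a sketch of the approach, not any particular tool:

```python
import random

random.seed(1)

def step(state):
    # made-up chain: a try succeeds with probability 0.5, else retry
    if state == "try":
        return "succ" if random.random() < 0.5 else "try"
    return state

def run_succeeds(t=3):
    """One bounded simulation: does the run reach 'succ' within t steps?"""
    state = "try"
    for _ in range(t):
        state = step(state)
        if state == "succ":
            return True
    return False

N = 10000
c = sum(run_succeeds() for _ in range(N))
print(c / N)  # estimate; the true probability is 1 - 0.5**3 = 0.875
```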


Hypothesis testing

In this technique, we state the problem as P(h) >= θ, call this our hypothesis H, and fix the number of simulations N and a threshold c.
There is also the converse hypothesis K, where P(h) < θ.
After the simulations, if more than c of them succeed, accept H; else accept K.

The possible errors are a false negative (accepting K when H holds; the error is bounded by α) and a false positive (accepting H when K holds; the error is bounded by β).
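A single sampling plan of this kind fits in a few lines. The event, its true probability and the thresholds below are invented purely for illustration:

```python
import random

random.seed(2)

def simulate():
    # made-up event h, which holds with true probability 0.7
    return random.random() < 0.7

def test_hypothesis(theta=0.6, N=1000, c=600):
    """Accept H: P(h) >= theta iff at least c of N simulations satisfy h."""
    successes = sum(simulate() for _ in range(N))
    return successes >= c          # False means: accept K, P(h) < theta

print(test_hypothesis())  # True: the true probability 0.7 clears theta = 0.6
```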


This extensive-sampling method is not very practical. A better way is shown in the graph below, by considering values from p1 to p0: it plots the probability of accepting hypothesis H as a function of P, with an indifference region.

  

Other techniques involved are the single sampling plan, adaptive sampling, sequential probability ratio test based model checking, boolean combinations and nested properties.

Challenges for statistical model checking
  • If the probability of an event is very low, we need more samples for a meaningful estimate
  • Incorporating non-determinism is difficult because each action determines a separate probability distribution (as in the Markov Decision Process)
Tools you can use for statistical model checking are Plasma and Uppaal.




Formal methods and web security
By Prof. R.K.Shyamsundar (TIFR)


Information flow control is an important foundation to building secure distributed systems. I asked the professor about this, and he said that to implement this, you need an operating system which is capable of categorizing information packets as secure/trusted/untrusted etc.


Multi Level Security

Sensitive information is disclosed only to authorized persons. There are levels of classification and levels form a partially ordered set.
In an information system, the challenge is to build systems that are secure even in the presence of malicious programs.
There are two aspects to this: Access control and correct labelling of information packets (determining the sensitivity of information entering and leaving the system).


The Bell LaPadula Model

In this model, entities are files, users and programs.
Objects are passive containers for information.
Subjects are active entities like users and processes.

Untrusted subjects are assigned a label and are permitted to read only from objects of a lower or equal security level.
Trusted processes are exempt from these restrictions.


Information Flow Control

This is mandatory for security. Confidentiality and integrity are information flow properties. Security is built into the design and you need to provide an end to end security guarantee.
Labels are assigned to subjects and objects. Read-write access rules are defined in terms of can-flow-to relation on labels.
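Those read-write rules can be sketched over a small, totally ordered set of labels. The levels below are illustrative, not from any real system, and the can-flow-to relation here is the simplest possible one (level comparison):

```python
# Illustrative security levels, ordered low to high.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top-secret": 3}

def can_flow_to(src, dst):
    """Information may flow from src to dst iff src's level <= dst's."""
    return LEVELS[src] <= LEVELS[dst]

def can_read(subject, obj):
    return can_flow_to(obj, subject)   # no read up

def can_write(subject, obj):
    return can_flow_to(subject, obj)   # no write down

print(can_read("secret", "confidential"))   # True:  reading down is allowed
print(can_read("confidential", "secret"))   # False: no read up
print(can_write("secret", "unclassified"))  # False: no write down
```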


Lattice model

A lattice is a finite ordered set in which every pair of elements has a least upper bound and a greatest lower bound. Using it, you can define reflexivity, aggregation properties etc. It's suitable for any system where all classes need to be totally ordered.


The Biba model

Information can flow from a higher integrity class to a lower integrity class; the model restricts a subject from writing to a more trusted object. Bell-LaPadula is about information confidentiality (preventing leakage of data);
Biba emphasizes information integrity (preventing corruption of data).


Protecting privacy in a distributed network

This involves a de-centralized label model. Labels control information flow.

Trusted execution model: Enforcing data security policy for untrusted code. Eg: How do you know if your secure information isn't being transmitted outside by the antivirus program you installed?

In 1985, the Orange Book defined trusted computer system evaluation strategy.
The flaw of older systems was that they considered confidentiality and integrity to be unrelated. Treating downgrading as unrelated to information flow control was also a flaw.

Readers-writers labels: Uses the RWFM label format.

State of an information system: It's defined by subjects, objects and labels.
Downgrading does not impact ownership, but you are giving read access to others. It's easy to verify/validate required security properties.

Application security protocols: there are authentication protocols and key-establishment protocols for this. Eg: the Needham-Schroeder protocols (the symmetric-key variant underlies Kerberos).
Also have a look at Lowe's attack on the Needham-Schroeder public-key protocol (NSPK).
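A symbolic sketch of the NSPK run and Lowe's man-in-the-middle attack, with encryption modelled as a tagged tuple (this is an illustration of the message flow, not real cryptography):

```python
def enc(key, *payload):
    return ("enc", key, payload)

def dec(key, msg):
    tag, k, payload = msg
    assert tag == "enc" and k == key, "wrong key"
    return payload

# A starts a legitimate session with the intruder I:
msg1 = enc("pk_I", "Na", "A")                 # 1. A -> I : {Na, A}_pkI
# I decrypts with its own key and re-encrypts for B, posing as A:
nonce, sender = dec("pk_I", msg1)
msg1_replay = enc("pk_B", nonce, sender)      #    I(A) -> B : {Na, A}_pkB
# B replies to "A"; crucially, message 2 names no sender:
msg2 = enc("pk_A", "Na", "Nb")                # 2. B -> A : {Na, Nb}_pkA
# I can't read msg2 but forwards it; A decrypts it and answers I:
msg3 = enc("pk_I", "Nb")                      # 3. A -> I : {Nb}_pkI
# I now knows Nb and completes the run with B:
msg3_replay = enc("pk_B", *dec("pk_I", msg3)) #    I(A) -> B : {Nb}_pkB

# B has authenticated "A", though A never intended to talk to B.
assert dec("pk_B", msg3_replay) == ("Nb",)
```

Lowe's fix is to include B's identity in message 2, i.e. {Na, Nb, B}_pkA, so A notices the reply did not come from the party it contacted.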


Web security

For cross-origin requests (Eg: a browser session using Gmail and YouTube at the same time), one of the major problems is that the stakeholders are identified inaccurately.
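The browser's basic stakeholder identity is the origin, the (scheme, host, port) triple behind the same-origin policy. A minimal sketch of the comparison:

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url):
    """Reduce a URL to its (scheme, host, port) origin triple."""
    p = urlsplit(url)
    return (p.scheme, p.hostname, p.port or DEFAULT_PORTS[p.scheme])

def same_origin(u1, u2):
    return origin(u1) == origin(u2)

assert same_origin("https://mail.google.com/a", "https://mail.google.com/b")
assert not same_origin("https://mail.google.com/", "https://www.youtube.com/")
assert not same_origin("http://example.com/", "https://example.com/")  # scheme differs
```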




Architectural semantics of AADL using Event-B
By Prof. Meenakshi D'Souza (IIIT-B)



The Architecture Analysis and Design Language (AADL) is used to specify the architecture of embedded software. This is different from a design language: it speaks of components and how they interact.

Consider the cruise control system of a car:

Abstract AADL model
On refining it further, you get this:


AADL allows modular specification of large-scale architectures.

The architecture specifies only interactions, not algorithms. AADL provides a mode/state transition specification system, and allows you to specify packages, components, sub-components etc.

  • AADL lets you model application software (threads, processes, data, subprograms, systems) and platform/hardware (processor, memory, bus, device, virtual processor, virtual bus)
  • Communication across components (data and event ports, synchronous call/return, shared access, end-to-end flows)
  • Modes and mode transitions
  • Incremental modelling of large-scale systems (package, sub-components, arrays of components and connections)
  • There's an Eclipse-based modelling framework, annexes for fault modelling, interactions, behaviour modelling etc.


Will you be able to use a formal method to check the property templates of AADL?
You have to, because the checking has to be automated: AADL can't verify an end-to-end latency on its own.
A formal method is needed, and the same goes for the bandwidth of the bus (in the diagram).


Event-B

It's a formal modelling notation based on first-order logic and set theory. Refinement in Event-B is formalized.
Dynamic behaviours are specified as events.
The Rodin toolset has a decomposition plugin which breaks down a machine into smaller machines/components.

The number of proof obligations in Event-B is huge even for a small set of AADL flows.

This is what an Event B program would look like:



This is how AADL is converted to Event B


Proving of architectural properties:
First, an Event-B model corresponding to the AADL model of the system is arrived at by manual translation.
The AADL model includes the system, devices, processes, threads, modes, buses, timing properties for thread dispatch, and transmission bounds on buses.
Global variables model time and the timing properties of threads in Event-B.
The end-to-end latency requirement of AADL is specified using a set of invariants of the overall model.
The latency requirements are discharged through 123 proof obligations, a few of which are proved through interactive theorem proving.


Conclusion

It's possible to provide semantics of architectural components in AADL using Event-B. Such semantics will help designers to formally prove requirements relating to architecture semi-automatically. Professor Meenakshi's team is currently implementing a translator that will automatically generate and decompose incremental Event B models from incremental AADL models.