Wednesday, July 14, 2010

Venture Capital; Arjun Gupta

Business India, Oct 1-14, 2001

A peak in the Valley

In the gloom and doom of the economic downturn, is Arjun Gupta another Vinod Khosla in the making?
Shivanand Kanavi

Brief Bio
Born: 1960
1977-80: BA Economics St Stephens College, Delhi
1980-82: BS, 1982-84: MS, Computer Science, Washington State University, Pullman
1984-87: System Software Engineer at Tektronix
1987-89: MBA Stanford University
1989-93: McKinsey & Co
1993-94: Kleiner Perkins
1994-97: The Chatterjee Group
1997-present: Founded Telesoft Partners


Arjun Gupta is a climber. He climbed with three major Himalayan expeditions, to Thalaysagar, Arjuna and Trishul, all peaks above 20,000 feet, before he was 18. Today, at 41, he is climbing some new peaks in a place as far removed from the Himalayas as you can get: Silicon Valley.

Does a valley have peaks? It sounds like an oxymoron from a school grammar book. For with the gut-wrenching dip in the tech business, there are, amazingly, a few bright spots and smiling faces left. Gupta is one. While many of his ilk have seen large parts of their portfolios vapourise, he is laughing all the way to the bank, along with his partners in the venture capital firm Telesoft Partners. He has made deals worth almost $1.5 billion in the past few months, when there was a bloodbath in the markets, even before the terrorist attacks.

The broadband chip start-up Vxtel was sold in February 2001 to Intel for $550 million (all cash). Another chipmaker, Catamaran, was sold to Infineon, a semiconductor spin-off from Siemens, for $250 million in May 2001. Lara Networks was sold to Cypress Semiconductors in June 2001 for $225 million, and Versatile, another chip start-up, was sold to chipmaker Vitesse Semiconductors for $267 million a week later. In August, Kyamata, an optical solutions firm, was acquired by Alcatel Optronics for $117 million. All this frenzied deal-making adds up to a cool $1.5 billion. Telesoft Partners funded all these companies. Also, Vxtel, Lara, Catamaran and Versatile happen to have Indian founders.

Gupta grew up in Delhi and studied economics at St Stephens. He did not want to be an engineer or a doctor, the traditional career paths for bright teenagers in India. But the events that followed have made him believe in karma. After landing in Washington on a student exchange programme, he ended up doing a BS in Computer Science in ’82 at Washington State University, Pullman, and an MS in ’84. He then signed up for a PhD at the same university. For his MS, Gupta worked on converting peripherals into network resources. At the PhD level he took up the utopian problem of load sharing: using whichever machine is available on the network. However, his doctoral ambitions were short-circuited by an offer from Tektronix, the company famous for its oscilloscopes.

“I got an opportunity to implement my thesis on their new digital oscilloscope. Tektronix was ahead of HP in the game; they grew to $1.5 billion and then went flat, but HP became a $40 billion company.

Tektronix did not take the risks. Silicon Graphics came to them to sell the idea of graphical workstations for $25 million, but they were not interested,” recalls Arjun of his engineering experience.

But Arjun was too much of an entrepreneur-techie. “I was a software engineer at Tektronix. We created a company called Computer Based Instruments, funded by Tektronix. But right after commercialisation they decided that the opportunity was too large to be left to ‘inexperienced hands’ and brought it back into the parent company,” says Arjun. But the disappointment at Tektronix did not deter him. He decided he should get a good grounding in business, and armed himself with an MBA. He joined Stanford in 1987 and spent a summer at Morgan Stanley in New York.

Investment banking did not excite Arjun. “It was interesting, but there were hardly half-a-dozen concepts in debt and equity and everything else was a permutation and combination. It became repetitive,” he recalls. So, after his MBA, he joined McKinsey & Co, which had just started a new programme to enter Silicon Valley. The basic problem was McKinsey’s generalist approach: as a consultant, one could do pharmaceuticals today, automobiles tomorrow and so on. That is fine in mature industries, where you apply basic business principles, but in a sector like technology, which changes at a fantastic rate, it did not make sense. During those four years, Arjun worked full time for two years at Apple Computer and two years at Pactel, a wireless company. Qualcomm’s Irwin Jacobs was then trying to convince Pactel that CDMA was the way to go, and Arjun wrote the first justification for Pactel to fund CDMA.

It did and Qualcomm took off.

Today, some people in the industry are saying that Arjun is another Vinod Khosla in the making. But Arjun himself suddenly turns reverential when you mention Khosla’s name: “Any comparison to Vinod is a total mischaracterisation. He is a giant with a mega firm (Kleiner Perkins). We are merely neophytes who are passionate about helping build great companies. At TeleSoft, I have certainly tried to apply the best practices I learnt from Purnendu Chatterjee and George Soros; Vinod Khosla and John Doerr at KP; Rajat Gupta and others at McKinsey; Bill Ford and Frank Quattrone at Morgan Stanley; and Atiq Raza and Raj Singh, who are friends and serial entrepreneurs.”

So, what are the lessons he picked up from the great and the good? “One definitive thing I learnt from Vinod is that if you really believe in something, then do it,” he says. Arjun’s passion was to start a next-generation communications fund and manage it himself. So he decided to raise the money on his own. He knew that without $75 million nobody would take him seriously, but that was a big sum for someone who was not known. Arjun did not give up. He announced a closing date for the fund and pursued investors relentlessly. “I talked to everybody: the Philadelphia Teachers Association, PC, old clients, friends and family. I also found in those desperate days that almost every government in the world has a loan programme for small business, so we looked into that.” Unlike most Americans, Arjun was not deterred by bureaucracy; he managed to get $40 million that way, and he finally ended up with $150 million instead of $75 million.

Thus, Telesoft-I raised $150 million, which returned a 450 per cent IRR! Having built a reputation and a track record, raising $500 million for Telesoft-II in 2000 was not very difficult. Today, Telesoft Partners has Alltel, Mannesmann and Vivendi investing in it, and several institutions as well. “You have got to be smart and work hard, but you need the network. If Vinod Khosla wants to get smart about a deal, he makes five calls, and at the end of three days he will know everything there is to know about a particular technology. But because he has done it for many years, he will have a framework for how to fit it in. I wanted to create a network of telecom companies. So we wanted carriers, corporate partners like Intel or Spectra Physics, and institutions. We have proactively gone to semiconductor vendors, carriers and systems guys, and given a presentation on our companies. They choose the startup and we ask the startup people to make a presentation. If they like it, there will be a commercial agreement, an investment or even an acquisition. You don’t need investment bankers to sell a company,” says Arjun, networker par excellence. Many VCs talk about the keiretsu model, ecosystems and so on, but few are actually able to implement it.

Today, Arjun is not cold-calling anyone. He has 500 individual investors, who know this is a new fund with fire in its belly. Telesoft’s management fee is 2.5 per cent and its carried interest is 25 per cent. The most reputed funds, like Kleiner Perkins, charge about 3 per cent in management fees and 30 per cent carried interest, while newcomers charge 2 per cent and 20 per cent respectively. Telesoft has already paid back the government, and again took $100 million in Telesoft-II. The government recognised that even though Telesoft is not small, it invests in small companies. Now many VC funds have realised this and are using government funds.

Arjun is probably the only VC who prints his successes and failures in his personal CV and in his company’s brochure. It boldly says there have been six write-offs recently and the fund has lost $20 million in those startups. “The probability of failure is very real, so let us get real. What used to take three calls for raising funds now takes seven calls. Today we would rather consolidate than do new investments,” says Arjun.

Arjun’s idealism has not gone unnoticed. Says Atiq Raza of Raza Foundries: “I have found Arjun to be focused, full of confidence and energy, working hard to understand the intersection of business and technology. He is thorough in his business practices. In addition, he is a great salesperson. He is also steeped in idealism. One day, he called me on my cell phone and described a project conducted by Stanford University to reduce military tension and nuclear risk between India and Pakistan. He also told me that he was going to provide 50 per cent of the funding and was inviting me to pick up the remaining 50 per cent. He thought it would be good if the project was funded jointly by an Indian expatriate and a Pakistani expatriate. I did not fully understand the project until later, but I had enough faith in Arjun’s values and judgement that I signed up.” That is tall praise coming from Atiq, a pioneering entrepreneur in the chip industry.

Rajvir Singh, founder of StratumOne, Sierra Networks and Cerent (later acquired by PMC Sierra, Redback Networks and Cisco respectively, for a combined valuation of $12 billion) says: “Arjun will go places. I have known him for a long time. He has built a very nifty organisation that is tightly managed by him as CEO. This style has some advantages and some disadvantages as well. He is very hardworking and truthful, and well connected with telecom carriers. His partner and MD, Yatin Mundukar, provides him great field support.”

Truly, Arjun Gupta is climbing new peaks even while the rest of the industry is in a valley. How has this been possible in a serious economic downturn? Is Arjun a hustler, just a smart deal-maker? Obviously, he is a very good networker, but that is not all. He has absorbed what he learnt at the Himalayan Mountaineering Federation in his teens. After all, mountaineering involves strategic daring and tactical caution, long-term planning of details and short-term flexibility, and the ability not only to reach the summit but to get back to base camp safely. And, above all, grit and determination.

Tuesday, July 13, 2010

Venture Capital: Vinod Khosla

Business India, Jan 22-Feb 4, 2001
Vinod Khosla
Shivanand Kanavi

“You can put my name in any search engine and you will get enough material on me, and I have said whatever I have to say in most of my interviews. So you can dispense with the usual questions and fire away,” said Vinod Khosla, when we met him in his office on Sand Hill Road in Menlo Park. The words were not tinged with arrogance but were a genuine attempt at getting to the core issues quickly.

That is how Vinod has made his famous picks: Juniper Networks, Cerent, Sierra, Redback and more, which, according to Fortune, have made over $16 billion for KPCB, making him “the most successful VC of all time”. Clearly, he has gotten to the core of the next generation of networking.

Vinod is famous for his brevity. Rajvir Singh, who has become a fountain of optical start-ups, recalls the first thing Vinod advised him in an e-mail, when he invested in Fiberlane (later split into Cerent and Sierra): “Keep the B.S. out of all communication”.

We say amen to that, and give below a few notes from our conversation, albeit pared with Occam's razor:

Money: In 10 years I have never done a rate of return calculation. I have only looked at economic contribution. After all, if you have made economic contribution, then money will come anyway. Many people talk about how much they will be worth. I reject all those who only talk about money. That is Wall Street mentality. It goes against my intellectual curiosity, predicting trends and so on.

Venture Capitalism: It is all about helping entrepreneurs build companies. Juniper is a classic example. When Pradeep Sindhu came to me, he had no business experience. I guided him in building Internet routers and then helped him find the team; I helped him find Scott Kriens. All these things are really hard to do if you are just an engineer, because you have never done anything like this. What we do is help make an idea into a company. It is like being a coach for a soccer team or a football team.

Startups: I do not miss being in a startup myself. It is a lot of work and you get stuck in one area. Technology is moving rapidly in so many areas and I have interest in so many areas. Every two to three years I completely change the area I am investing in. I take a few months off to learn the whole technology and develop a vision of what the world is going to be like - it is literally going back to school – then start investing.

Current interests: Whether it is optical components, which is physics and material science or enterprise software, the only way to do it is to take three months off, learn and come back. My position lets me do it. I have got curiosity. I change my interests regularly when I get bored.
All three of my degrees are in completely different areas. Right now, as hobbies, I keep up with string theory and evolutionary biology.

Big vs small companies: It is not big vs small. People who refuse to take risks are losing. Lucent had more talent than Nortel. But Nortel has changed: they have absorbed an entrepreneurial culture. Lucent has the wrong acquisition strategy and the wrong culture. People don’t leave Cisco when it acquires them, but they do when Lucent does. It is much harder for big companies, but Nortel has done it.

Optical Networking: In both optical and wireless, valuations are hyped and over-hyped. But if you look at the impact they are going to have on society, on the way business is going to be run and so on, then they are underestimated. Investors are like lemmings; suddenly they go from greed to fear.

Indian entrepreneurs: The stockmarket is not a good indicator. Some have built businesses but some have built market caps. It is a bad value system. The issue is what you can create that has lasting value. Desh has real revenue. I like what Desh did. In the end his value will be judged by whether he makes the economic contribution. That is what Pradeep is doing. Intel, Sun, Dell, Microsoft, Oracle all made contributions.

Education in India: A country of the size of India, a billion strong, does not have a major university which is world class and leading in research, so that it does not have to depend on all the research done in the US. You have to take a 50-year view of this, not five to ten years. Over the long haul, India has the talent, the language (English) and enough infrastructure. It will grow in a very, very big way in the knowledge economy. Hopefully, people from all over the world will go to India to do research. That is the genesis of my interest in Global Institutes of Science and Technology.

Role models: I was 15-16 and living in Delhi Cantonment, as my father was in the army. I used to go to Shankar Market and rent old issues of electronics trade journals, which you could get there cheaply. I read about Intel being started up by a couple of engineers. That was my dream long before I went to IIT. In 1975, even before I finished IIT, I tried to start a company. In those days in India, it was not possible if your father did not have connections. That is why I resonate with role models. Andy Grove and Intel became role models for me.

Vinod Khosla loves travel and photography. Blown-up pictures of his children, taken by him, hang all over his office.

Thursday, June 24, 2010

Sand to Silicon, Internet Edition-7

EPILOGUE

THE COLLECTIVE GENIUS

“The process of technological development is like building a cathedral. Over the course of several hundred years, people come along and lay a block on top of the old foundations, each saying, ‘I built a cathedral.’ Next month another block is placed atop the previous one. Then a historian asks, ‘Who built the cathedral?’ Peter added some stones here, and Paul added a few more. You can con yourself into believing that you did the most important part, but the reality is that each contribution has to follow on to previous work. Everything is tied to everything else.
Too often history tends to be lazy and give credit to the planner and the funder of the cathedral. No single person can do it all, or ever does it all.”

—PAUL BARAN, inventor of packet switching

Baran’s wise words sum up the pitfalls in telling the story of technology. Individual genius plays a role but giving it a larger-than-life image robs it of historical perspective.

In India, there was a tradition of collective intellectual work. Take, for instance, the Upanishads,† or the Rig Veda;‡ no single person has claimed authorship of these works, much less the intellectual property rights. Most ancient literature is classified as smriti (memory, or, in this case, collective memory) or shruti (heard from others). Even Vyasa, the legendary author of the Mahabharat, claimed that he was only a raconteur. Indeed, it is a tradition in which an individual rarely claims “to have built the cathedral”.

When I started researching this book, the success of Indian entrepreneurs in information technology was a well-known fact. As a journalist, I had met many of them, but I was curious to know which of them had contributed significantly to technological breakthroughs. While tracing the story of IT, I have also cited the work of several Indian technologists without laying any claim to completeness.

Nobody doubts the intellectual potential or economic potential of a billion Indians. However, to convert this potential into reality we need enabling mechanisms. The most important contribution of IT is the network. The network cuts across class, caste, creed, race, gender, nationality and all other sectarian barriers. The network, like all other collectives, creates new opportunities for collaboration, competition, commerce, cogitation and communication. It can inspire the collective Indian genius.

The hunger for opportunity, for knowledge, for change, is all there. I have seen it in the cities and villages of India. The political, intellectual and business elite of this country should break the barriers of the current networks of millions and build a network of a billion.

This is the call of the times: Hic Rhodus, hic salta – Here is the rose, now dance!


__________________
†Ancient Hindu philosophical texts that summarise the philosophy of the Vedas.
‡Considered the oldest Hindu scripture, carried forward for centuries through oral tradition.
_________________________


The Author: Shivanand Kanavi can be contacted at skanavi@yahoo.com

Tuesday, June 22, 2010

Sand to Silicon, Internet Edition-7

The Internet

“Great Cloud. Please help me. I am away from my beloved and miss her very much. Please go to the city called Alaka, where my beloved lives in our moonlit house.”

—From Meghadoot (messenger cloud) of Kalidasa,
Sanskrit poet, playwright, fourth century AD

I am sure the Internet is on the verge of taking off in India to the next level of usage. I am not making this prediction based on learned market research by Gartner, Forrester, IDC or some other agency, but on observing my wife.

While I was vainly trying to get her interested in the PC and the Internet, I downloaded pages and pages of information on subjects of her interest. She thanked me but refused to get over her computer phobia. Not that she is anti-technology or any such thing. (In fact, she took to cell phones and SMS faster than me and showed me all kinds of tricks on her cell phone.) But, whenever I managed to bring her to the PC and turned on the Internet, she would say, “Who can stand this ‘World Wide Wait’?!”, and I would give up.

But a sea change is taking place in front of my eyes. After our software engineer son went abroad to execute a project for his company, she picked up chatting with him on Instant Messengers and was glad to see him ‘live’ on the Webcam. Now, every day, she is learning something new and singing hosannas to the Internet.

Perhaps the novelty will wear off after some time, but she has definitely gotten over her computer phobia. According to her, many of her friends are learning to use the Net.

Based on this observation, I have concluded that the Internet is going to see a burst of new users from India. I am certain that if all the initiatives that are being taken privately and publicly on bridging the Digital Divide between those who have access to the Net and those who do not are pursued seriously, then we might have over 200 million Internet users in India alone in ten to fifteen years.

That is a bold prediction, considering that there are hardly 10 million PCs and 40 million telephone lines at the moment, and estimates of the number of Internet users vary widely: figures from 3 million to 10 million are quoted, and nobody is sure. Like newspapers and telephones, Internet accounts too are heavily shared in India. In offices and homes, several people share a single Internet account. And then there are cyber cafes too.

The Internet has become a massive labyrinthine library, where one can search for and obtain information. It has also evolved into an instant, inexpensive communication medium where one can send email, and even images, sounds and videos, to a receiver anywhere, girdling the globe.

There are billions of documents on the Internet, on millions of computers known as Internet servers, all interconnected by a tangled web of cables, optic fibres and wireless links. We can be part of the Net through our own PC, laptop, cell phone or palm-held Personal Digital Assistant, using a wired or wireless connection to an Internet Service Provider. There are already hundreds of millions of users of the Internet.

You might have noticed that I have refrained from quoting a definite figure in the para above and have, instead, used ballpark figures. The reason is simple: I can’t. The numbers are constantly changing even as we quote them. Like Jack’s beanstalk, the Net is growing at a tremendous speed.

However, one thing we learn from ‘Jack and the Beanstalk’ is that every giant magical tree has humble origins. The beans, in the case of Internet, were sown as far back as the sixties.

It all started with the Advanced Research Projects Agency (ARPA) of the US Department of Defence. ARPA had been funding advanced computer science research from the early ’60s. J.C.R. Licklider, who was then working at ARPA, took the initiative in encouraging several academic groups in the US to work on interactive computing and time-sharing. We saw the historical importance of these initiatives in the chapter on computing.

One glitch, however, was that these different groups could not easily share their programs, data or even ideas with each other. The situation was so bad that Bob Taylor, at the Information Processing Techniques Office of ARPA, had three different terminals in his office in the Pentagon, connected to three different computers that were being used for time-sharing experiments at MIT, UCLA and Stanford Research Institute. Thus started an experiment in enabling computers to exchange files among themselves. Taylor played a crucial role at ARPA in creating this network, which was later named the Arpanet. “We wanted to create a network to support the formation of a community of shared interests among computer scientists, and that was the origin of the Arpanet,” says Taylor.

ARPANET WAS FOR COMMUNICATION

What about the story that the Arpanet was created by the US government’s Defence Department, to have a command and control structure to survive a nuclear war? “That is only a story, and not a fact. Charlie Herzfeld, who was my boss at ARPA at one time, and I have made several attempts to clarify this. We should know, since we initiated it and funded Arpanet,” says Taylor.

Incidentally, the two still remain good friends. When US president Bill Clinton awarded the National Technology Medal to Bob Taylor in 2000 for playing a leading role in personal computing and computer networking, Charles Herzfeld received the award on his behalf, since Taylor refused to travel to Washington, D.C.

It is a fact, however, that the first computer network to be proposed theoretically was for military purposes. It was to decentralise nuclear missile command and control. The idea was not to have centralised, computer-based command facilities, which could be destroyed in a missile attack. In order to survive a missile attack and retain what was known, during the US-Soviet Cold War, as ‘Second Strike Capability’, Paul Baran of Rand Corporation had proposed the idea of a distributed network. In those mad days of Mutually Assured Destruction, it seemed logical.

Baran elaborated his ideas to the military in an eleven-volume report ‘Distributed Communications System’ during 1962-64. This report was available to civilian research groups as well. However, no civilian network was built based on it. Baran even worked out the details of a packet switched network, though he used a clumsy name, ‘Distributed Adaptive Message Block Switching’. Donald Davies, in the UK, independently discovered the same a little later and called it packet switching.

“We looked at Baran’s work after we started working on the Arpanet proposal in February 1966,” says Taylor. “By that time, we had also discovered that Don Davies in the UK had independently proposed the use of packet switching for building computer networks. As far as we were concerned, the two papers acted as a confirmation that packet switching was the right technology to develop the ARPA network. That is all. The purpose of our network was communication and not ballistic missile defence,” asserted Taylor passionately to the author. After his eventful innings at ARPA and Xerox PARC, Taylor has now retired to a wooded area not too far from the frenetic Silicon Valley.

DECENTRALISE

There was still a problem. How do you make Earthlings talk to Martians? Just kidding! Don’t worry, I am only trying to convey the difficulty of making two computers communicate when they are built by different manufacturers, with different operating systems and software, as they were then. Without any exaggeration, the differences between the ARPA computers at MIT and UCLA or Stanford Research Institute were as vast as those between Earthlings, Martians and assorted aliens.

“The problem was solved brilliantly by Wes Clark,” says Bob Taylor. “He said, let us build special-purpose computers to handle the packets, one at each ‘host’ (as each ARPA computer was known at that time). These special computers, known as Interface Message Processors (IMPs), would be connected through leased telephone lines. ARPA would ask a contractor to develop the communication software and hardware for the IMPs, while each site would worry about developing the software to make its host computer talk to the local IMP. Since the research group at each site knew their own computer inside out, they would be able to do it. Moreover, this approach involved the many scattered research groups in building the network, rather than leaving them as mere users of it,” says Taylor, well known as a motivator and a subtle manager. Thus, the actual network communication through packets took place between standardised IMPs designed centrally.

At one stroke, Wesley Clark had solved the problem of connecting Earthlings to Martians using IMPs. As it turned out, Donald Davies had also arrived at the same conclusion in the UK but could not carry it further since the UK did not have a computer networking project at that time.

As a management case study, the execution of the Arpanet is very interesting. A path-breaking initiative involving diverse elements is complex to execute. The skill of management lies in separating global complexity from local, centralising the global complexity while decentralising the resolution of local complexity. The execution of the Arpanet was one such case. Mature management was as important as the development of new technology for its speedy and successful build-up.

POST OFFICES AND PACKET SWITCHING

What is packet switching? It is the single most important idea in all computer networks, be it the office network or the global Internet. The idea is simple. In telephone networks (as we saw in the chapter on telecommunications), the two communicating parties are provided a physical connection for the duration of the call; the switches and exchanges take care of that. Hence this mode of establishing a communication link is called ‘circuit switching’. If, for some reason, the circuit is broken, due to the failure of a switch, a severed transmission line or something similar, then the call gets cut too.

In advanced telephone networks, your call may be routed through some other trunk line, avoiding the damaged section. However, this can take place only in a second try and if an alternative path is available. Moreover, a physical connection is again brought into being for the duration of the call.

In packet switching, however, if a computer is trying to send a ‘file’ (data, programs or email) to another computer, the file is first broken into small packets. The packets are then put inside ‘envelopes’ carrying the address of the computer they are supposed to go to, along with the sequence number of the packet. These packets are then let loose in a network of packet switches called routers.

Each router that receives a packet acts like a sorter in a post office: it reads the address, does not open the envelope, and sends the packet on its way to the right destination. Thus the packets reach the destination, where they are put back in the appropriate order to reassemble the original message.
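To make this concrete, here is a minimal Python sketch of the idea; it is an illustration only, not any real protocol, and the packet size and field names are invented for the example:

    import random

    PACKET_SIZE = 10  # characters of payload per 'envelope'; real networks carry ~1,500 bytes

    def packetise(message, destination):
        # Break the message into chunks; each 'envelope' carries the
        # destination address and a sequence number.
        return [{"to": destination, "seq": i, "data": message[i:i + PACKET_SIZE]}
                for i in range(0, len(message), PACKET_SIZE)]

    def reassemble(packets):
        # The receiver sorts by sequence number to restore the original
        # order, however the packets arrived.
        return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

    message = "A file is broken into packets and reassembled at the destination."
    packets = packetise(message, "kumbhakonam-server")
    random.shuffle(packets)  # packets may arrive out of order
    assert reassemble(packets) == message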

Of course, there is an important difference between the postal sorter and an Internet packet switch or router. A postal sorter in Mumbai will send a letter addressed to Kumbhakonam in a big bag to Chennai along with all the letters going to Tamil Nadu. The Chennai sorter will then send it to Kumbhakonam. The sorter in Mumbai will never (we hope!) send it to Kolkata. But a network router might do just that.

The router will see if the path to Chennai is free; if not, it will send it to the router in Kolkata, which will read the Kumbhakonam address; but if the Kolkata-Chennai path is not free, then it might send it to Bangalore; and so on. In short, routers have the intelligence to sense the traffic conditions in the network, continuously update themselves on the state of the network, decide on the best path at the moment for the packet and send it forward accordingly.

This is very different from the method used by the postal sorter. What if congestion at Chennai leads to packet loss? Moreover, what if a link in the chain is broken? In the case of the postal system, it might take a long time to restore the service, but in packet switching, if a router goes down it does not matter: the network will find some other path to send the packet. And if, despite all this, a packet does not reach the destination, the computer at the destination will ask for it to be sent again.
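The routing behaviour can be sketched the same way. In this toy network the city names are only labels; a router finds the shortest available path, and when a link goes down it simply finds another. Real routers run distributed routing protocols to the same end, but a breadth-first search captures the spirit:

    from collections import deque

    links = {
        "Mumbai": ["Chennai", "Kolkata", "Bangalore"],
        "Chennai": ["Mumbai", "Kolkata", "Kumbhakonam"],
        "Kolkata": ["Mumbai", "Chennai"],
        "Bangalore": ["Mumbai", "Kumbhakonam"],
        "Kumbhakonam": ["Chennai", "Bangalore"],
    }

    def route(src, dst, down=frozenset()):
        # Breadth-first search for the shortest path that avoids failed links.
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in links[path[-1]]:
                if nxt not in seen and (path[-1], nxt) not in down:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(route("Mumbai", "Kumbhakonam"))  # ['Mumbai', 'Chennai', 'Kumbhakonam']
    print(route("Mumbai", "Kumbhakonam", down={("Mumbai", "Chennai")}))  # rerouted via Bangalore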

You might say this does not look like a very intelligent way of communicating, and you would not be alone; the whole telecom industry said so. Computer networking through packet switching was opposed tooth and nail by telecom companies and even those with cutting-edge technologies like AT&T! They suggested that leased lines be used to connect computers and that was that.

'PACK IT UP', SAID AT&T

Networking pioneers like Paul Baran, Bob Taylor, Larry Roberts, Frank Heart, Vinton Cerf, Steve Crocker, Bob Metcalfe, Len Kleinrock, Bob Kahn and others have recalled, in several interviews, the struggle they had to go through to convince AT&T, the US telephone monopoly of those days.

AT&T did not believe packet switching would work, and feared that if it ever did, it would become a competing network and kill its business! This battle between data communication and incumbent telephone companies is still not over. As voice communication adopts packet technology, as in Voice over Internet, the old phone companies all over the world are conceding to packet switching kicking and screaming.

This may look antediluvian, but forty years ago, the disbelievers did have a justification: the computers required to route packets were neither cheap enough nor fast enough.

The semiconductor and computer revolution has taken care of that and today’s routers look like sleek DVD or VCR players and cost a fraction of old computers. Routers are actually very fast, special-purpose computers; hence packets take microseconds to be routed at each node.
The final receiver is able to reassemble the message with all the packets intact and in the right order in a matter of a few milliseconds. Nobody is the wiser about what goes on behind the scenes with packets zigzagging through the network madly before reaching the destination.

In the case of data communication, making sure that none of the packets has been lost is more important than the time taken. For example, if I send the publisher this chapter through the network, I do not want words, sentences or punctuation missing or jumbled up, taking the mickey out of it. However, if I am sending a voice signal in packetised form, then the receiver is interested in real-time communication, even at the cost of a few packets. That is the reason the voice appears broken when we use instant messengers to talk to a friend on the Net. However, with constant R&D the technology of packet switching is improving, and one can carry out a decent voice or even video conversation, or listen to radio or watch a TV broadcast, on the Internet.

WHY NOT CIRCUIT SWITCHING?

The main advantage of using packet switching in computer communication is that it uses the network most efficiently. That is why it is also the most cost-effective. We have already seen the effort telecom engineers make to use their resources optimally through clever multiplexing and circuit switching; so why don’t we just use telephone switches between computers?

There is a major difference between voice and data communication. Voice communication is more or less continuous, with a few pauses, but computer communication is bursty. That is, a computer will send megabytes for some time and then fall silent for a long time. With circuit switching we would be blocking a telephone line all the while, and mostly long-distance lines at that, making it very expensive.

Suppose you have a website on a server and I am visiting it. I find a file interesting and ask for it through my browser. The request is transmitted to your server and it sends the file in a few seconds. Then I take fifteen minutes to read that file before making another request. Imagine what would happen if I were in Mumbai, your server was in Milwaukee, and we were circuit switched: an international line between Mumbai and Milwaukee would have to be kept open for fifteen minutes, waiting for the next request! If the Internet were based on circuit switching, it would not only be expensive, but just a few lakh users would tie up the entire global telephone network.
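A back-of-the-envelope calculation, with assumed figures, shows just how wasteful that held-open circuit would be:

    # Assumed figures, purely for illustration.
    file_size_bits = 100_000 * 8       # a 100 KB web page
    line_speed_bps = 64_000            # a 64 kbps international circuit
    hold_time_s = 15 * 60              # circuit held open while I read for 15 minutes

    transfer_time_s = file_size_bits / line_speed_bps  # 12.5 seconds of actual use
    print(f"Circuit utilisation: {transfer_time_s / hold_time_s:.1%}")  # about 1.4%

The line does useful work for barely one per cent of the time it is held; packet switching frees it for everyone else in the idle gaps.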

Using ARPA funds, the first computer network based on packet switching was built in the US between 1966 and 1972. A whole community of users came into being at over a dozen sites and started exchanging files. Soon they also developed a system to exchange notes, which they called ‘e-mail’ (an abbreviation of electronic mail). Abhay Bhushan, who worked on the Arpanet project from 1967 to 1974, was then at MIT and wrote the note on FTP, or File Transfer Protocol, the basis of email. In those days, several theoretical and practical problems were sorted out through RFCs, which stood for Request For Comments: messages sent to all Arpanet users. Any researcher at the dozen-odd ARPA sites could pose a problem or post a solution through such RFCs. Thus an informal, non-hierarchical culture developed among these original Netizens. “Those were heady days, when so many things were done for the first time without much ado,” recalls Abhay Bhushan.

WHO PUT @ IN EMAIL?

A form of email was already known to users of time-sharing computers, but with the Arpanet coming into being, new programs started popping up for sending mail piggyback on the File Transfer Protocol. An email program that immediately became popular due to its simplicity was sendmsg, written by Ray Tomlinson, a young engineer at Bolt Beranek and Newman (BBN), the Boston-based company that was the prime contractor for building the Arpanet. He also wrote a program called readmail to open the email. His email programs have, of course, been superseded in the last thirty years, but one thing that has survived is the @ sign to denote the computer address of a sender. Tomlinson was looking for a symbol to separate the receiver’s user name from the address of his host computer. When he looked at his Teletype, he saw a few punctuation marks available and chose @, since it had the connotation of ‘at’ among accountants and did not occur in software programs with some other meaning.

An idea some Arpanet pioneers like Larry Roberts promoted in those days to justify funding was resource sharing: if your ARPA computer was overloaded with work and another ARPA computer across the country had some free time, you could connect to it through the Arpanet and use it. Though it sounded reasonable, it never really worked that way. Computer networks, in fact, came to be used increasingly for communication of both the professional and the personal kind. Today, computer-to-computer communication through email and chat has become one of the ‘killer apps’ (an industry term for an application that makes a technology hugely popular and hence provides for its sustenance) for the Internet.

LOCAL AREA NETWORKS

The Arpanet matured during the ’70s. Bob Taylor, who had left ARPA in 1969, had started the computing research division at the brand-new Xerox PARC in Palo Alto, California. His inspiration remained Licklider’s dream of interactive computing. At PARC it evolved into an epoch-making project of personal computing that led to the development of the mouse, icons and graphical user interfaces, windows, the laser printer, the desktop computer and so on, which we have discussed in the chapter on personal computing. PARC scientists were also the first users of their own technology: the Alto was developed at PARC as a desktop machine for its engineers.

For Taylor, connecting these desktop computers in his lab was but a natural thing. So he assigned Bob Metcalfe, a brilliant engineer, who had earlier worked with Abhay Bhushan at MIT on the Arpanet, to the task. Interestingly, both Metcalfe and Bhushan shared a dislike for academic snobbery. MIT, the Mecca of engineering, had turned down Bhushan’s work on Arpanet as being too low-brow for a PhD, while Harvard had turned down Metcalfe’s. “They probably wanted lots of equations and Greek symbols,” said Metcalfe once, sarcastically.

As a footnote, it is worth noting that, later, Harvard accepted Metcalfe’s analysis of the Alohanet experiment in Hawaii for a PhD ‘reluctantly’, according to Metcalfe. MIT, however, has made up for Harvard by putting Metcalfe on its governing board.

What was Alohanet? When Metcalfe started experimenting on Local Area Network (LAN) at PARC, he looked at Alohanet, which had already come into being in Hawaii, thanks to ARPA funding. Since Hawaii is a group of islands, the only way the computer at the University of Hawaii’s main campus could be connected to the terminals at other campuses on different islands was through a wireless network. It was appropriately called Alohanet, since ‘Aloha’ is the Hawaiian equivalent of ‘Hi’. Norman Abramson, a professor of communication engineering at the University of Hawaii, had designed it.

The Alohanet terminals talked through radio waves with an IMP, which then communicated with the main computer. It looked like a straightforward extension of Arpanet ideas, but there was a difference. In the Arpanet, leased telephone lines connected the IMPs, and each IMP could ‘see’ the traffic conditions and send its packets accordingly. In Alohanet, however, a terminal could not know in advance what packets were floating in the ether. So it would wait for the destination computer to acknowledge receipt of a packet, and if it did not get an acknowledgement, it would send the packet again after waiting for a random amount of time. With a limited number of computers, the system worked well.

Metcalfe saw similarities between Alohanet and his local area network inside Xerox PARC, in that each computer sent its packets into the void, the ‘ether’ of a coaxial cable, and hoped that they reached the destination. If it did not get an acknowledgement, it would wait a random few microseconds and send the packets again. The only difference was that, while Alohanet was quite slow due to low bandwidth, Metcalfe could easily jack up the speed of his LAN to a few megabits per second. He named his network protocol and architecture Ethernet.
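The send-and-wait-a-random-time rule is simple enough to sketch. In the toy version below, the channel and its loss rate are invented; the point is the retransmission logic shared by Alohanet and Ethernet:

    import random

    def send_with_backoff(transmit, max_attempts=16):
        # Send; if no acknowledgement arrives (a collision or a lost
        # packet), wait a random time and try again. Doubling the
        # backoff window echoes Ethernet's exponential backoff.
        for attempt in range(max_attempts):
            if transmit():  # returns True if the receiver acknowledged
                return attempt + 1
            wait = random.uniform(0, 2 ** attempt)  # microseconds, say
            # a real sender would now pause for 'wait' microseconds
        raise RuntimeError("gave up after repeated collisions")

    # A flaky 'ether' that loses 60 per cent of packets, for illustration.
    print("delivered after", send_with_backoff(lambda: random.random() > 0.6), "attempt(s)")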

WHAT IS A NETWORK PROTOCOL?

A ‘communication protocol’ is a favourite term of networking engineers, just as ‘algorithm’ is a favourite of computer scientists. Leaving the technical details aside, a protocol is a step-by-step procedure that enables two computers to ‘talk to each other’, i.e. exchange data. We use protocols all the time in human communication, so we don’t notice them. But if two strangers met, how would they start to converse? They would begin by introducing themselves, finding a common language, and agreeing on a level of communication: formal, informal, professional, personal, polite, polemical and so on, before exchanging information.

Computers do the same thing through different protocols. For example, what characterises the Alohanet and Ethernet protocols is that packets are sent again, after a random wait, if they were lost due to a collision. We do this too. If two people start talking at once, their information ‘collides’ and does not reach the other person, the ‘destination’. Each then waits politely, for a random amount of time, for the other person to start talking, and begins again only if he or she does not. That is what computers connected by Ethernet do too.

When Xerox, Intel and DEC agreed to adopt Ethernet as a networking standard and made it public in 1980, Metcalfe saw an opportunity and started a company called 3COM (Computers, Communications, Compatibility) to supply Ethernet cards and other equipment. Within a few years 3COM had captured the office networking market even though other proprietary systems from Xerox and IBM were around. This was mainly because Ethernet had become an open standard and 3COM could network any set of office computers and not just those made by IBM or Xerox. Thereby hangs another tale of the fall of proprietary technology and the super success of open standards that give the option to the user to choose his components.

Today, using fibre optics, Ethernet can deliver lightning data speeds of 100 Mbps. In fact, a new version called Gigabit Ethernet is fast becoming popular as a technology to deliver high-speed Internet to homes and offices.

Local Area Networks have changed work habits in offices across the globe like never before. They have led to file-sharing, smooth workflow and collaboration, besides speedy communication within companies and academic campuses.

THE PENTAGON AS THE CHESHIRE CAT

As the Arpanet rose in popularity in the ’70s, a clamour started from every university and research institution to be connected to it. Everybody wanted to be part of this new community of shared interests. However, not everyone in a local area network could be given a separate Arpanet connection, so one needed to connect entire LANs to the Arpanet. Here again there was a diversity of networks and protocols. So how would you build a network of networks (also called an internet)? This problem was largely solved by Robert Kahn and Vinton Cerf, who developed TCP (Transmission Control Protocol), and hence they are justly called the inventors of the Internet.

Regarding the motivation for the Internet, Cerf pointed out that one of them was definitely defence. “The Arpanet was built for civilian computer research, but when we looked at connecting various networks, we found that there were experiments going on with packet satellite networks, to network naval ships and shore installations using satellites. Then there were packet radio networks, where packets were sent wirelessly by mobile computers. Obviously the army was interested, since this represented to them some battlefield conditions on the ground. In fact, today’s GPRS (General Packet Radio Service), or 2.5G, cell phone networks are based on the old packet radio. At one time we put packet radios on air force planes, too, to see if strategic bombers could communicate with each other in the event of nuclear war. So the general problem was how to internetwork all these different networks, but the development of the technology had obvious military interest,” says Cerf. “But even the highway system in the US was built with missile defence as a justification. It led to a leap in the automobile industry and in housing, and changed the way we live and work, besides the transportation of goods, but I do not think any missile was ever moved over a highway,” he adds.

Coming back to the problem of building a network of networks, Cerf says, “We had the Network Control Protocol to run the Arpanet; Steve Crocker had led that work. But the problem was that NCP assumed that packets would not be lost, which was okay to an extent within the Arpanet, but Bob Kahn and I could not assume the same on the Internet. Here, each network was independent and there was no guarantee that packets would not be lost, so we needed recovery at the edge of the net. When we first wrote the paper, in 1973-74, we had a single protocol called TCP. Routers could take packets that were encapsulated in the outer networks and carry them through the Internet. It took four iterations, from 1974 to 1978, to arrive at what we have today. We split the TCP program into two. One part worried about just carrying packets through the multiple networks, while the other worried about restoring the sequencing and looking at packet losses. The first was called IP (Internet Protocol) and the other, which looked after reliability, was called TCP. Therefore we called the protocol suite TCP/IP. Interestingly, one of the motivations for separating the two was to carry speech over the Internet,” reveals Cerf.
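That division of labour can be caricatured in a few lines of Python: an ‘IP layer’ that merely carries packets, possibly losing and reordering them, and a ‘TCP layer’ at the edge that restores the sequence and names the packets to be sent again. This is a toy model, not the real protocols:

    import random

    def ip_deliver(packets, loss_rate=0.2):
        # Best-effort carriage: packets may be lost or arrive out of order.
        survivors = [p for p in packets if random.random() > loss_rate]
        random.shuffle(survivors)
        return survivors

    def tcp_receive(packets, expected_seqs):
        # Reliability at the edge: restore sequence, report what is missing.
        got = {p["seq"]: p for p in packets}
        ordered = [got[s] for s in expected_seqs if s in got]
        missing = [s for s in expected_seqs if s not in got]
        return ordered, missing

    packets = [{"seq": i, "data": f"chunk-{i}"} for i in range(10)]
    received, missing = tcp_receive(ip_deliver(packets), range(10))
    print("in sequence:", [p["seq"] for p in received])
    print("ask the sender to retransmit:", missing)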

TCP allowed different networks to get connected to the Arpanet. The IMPs were now divided into three boxes: one dealt with packets going in and out of the LAN, another dealt with packets going in and out of the Arpanet, and a third, called a gateway, passed packets from one to the other while correctly translating them into the right protocols.

Meanwhile, in 1971, an undergraduate student at IIT Bombay, Yogen Dalal, was frustrated by the interminable wait to get his programs executed by the old Russian computer. Thanks to encouragement from a faculty member, J R Isaac, who was then head of the computer centre, Dalal started a BTech project on building a remote terminal for the mainframe. “Like all undergraduate projects, this also did not work,” laughs Dalal, recalling those days. But when he went to Stanford for his MS and PhD and saw cutting-edge work being done in networking by Cerf & Co., he naturally got drawn into it.

As a result, Vinton Cerf, Yogen Dalal and another graduate student, Carl Sunshine, wrote the first paper setting forth the standards for an improved version of TCP/IP, in 1974, which became the standard for the Internet. “Yogen did some fundamental work on TCP/IP. I remember, during 1974, when we were trying to sort out various problems of the protocol, we would come to some conclusions at the end of the day and Yogen would go home and come back in the morning with counter examples. He was always blowing up our ideas to make this work,” recalls Cerf.

“They were the most exciting years of my life,” says Yogen Dalal, who after a successful career at Xerox PARC and Apple, is a respected venture capitalist in Silicon Valley. Recently he was listed as among the top fifty venture capitalists in the world.

THE TANGLED WEB

In the eighties, networking expanded further among academics, and the Internet evolved as a communication medium with all the trappings of a counter culture.

Two things changed the Internet from a network meant for specialists into one that millions could relate to. One was the development of the World Wide Web, and the other was a small program called the browser, which allowed you to navigate the Web and read its pages.

The web is made up of host computers connected to the Internet containing a program called a Web Server. The Web Server is a piece of computer software that can respond to a browser’s request for a page and deliver the page to the Web browser through the Internet. You can think of a Web server as an apartment complex with each apartment housing someone’s Web page. In order to store your page in the complex, you need to pay rent on the space. Pages that live in this complex can be displayed to and viewed by anyone all over the world. The host computer is your landlord and your rent is called your hosting charge. Every day, there are millions of Web servers delivering pages to the browsers of tens of millions of people through the network we call the Internet.

The host computers connected to the Net, called Internet servers, are each given a certain address. The partitions within a server, hosting separate documents belonging to different owners, are called websites. Each website in turn is also given an address, a Uniform Resource Locator (URL). These addresses are assigned by an independent agency, which acts in a manner similar to the registrar of newspapers and periodicals or the registrar of trademarks, who allows you to use a unique name for your publication or product if others are not already using it.

When you type in the address or URL of a website in the space for the address in your browser, the program sends packets requesting to see the website. The welcome page of the website is called the home page. The home page carries an index of other pages, which are part of the same website and residing in the same server. When you click with your mouse on one of them, the browser recognises your desire to see the new document and sends a request to the new address, based on the hyperlink. Thus, the browser helps you navigate the Web or surf the information waves of the Web—which is also called Cyberspace, to differentiate from real navigation in real space.

The web pages carry composing or formatting instructions in a computer language known as Hyper Text Markup Language (HTML). The browser reads these instructions, or tags, when it displays the web page on your screen. It is important to note that the page on the Internet does not actually look the way it does on your screen. It is a text file with embedded HTML tags giving instructions like ‘this line should be bold’, ‘that line should be in italics’, ‘this heading should be in this colour and font’, ‘here you should place a particular picture’ and so on. When you ask for that page, the browser brings it from the web server and displays it according to the coded instructions. A web browser is a program in your computer that has a communication function and a display function. When you ask it to go to an Internet address and get a particular page, it sends a message through the Internet to that server, gets the file and then, interpreting the coded HTML instructions in that page, composes the page and displays it to you.
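A few lines of Python, using the standard library’s HTML parser, illustrate what ‘reading the tags’ amounts to; the tiny page here is invented for the example:

    from html.parser import HTMLParser

    page = ("<html><body><h1>Hello</h1><p>This is <b>bold</b> text with a "
            "<a href='http://example.com'>hyperlink</a>.</p></body></html>")

    class TinyBrowser(HTMLParser):
        # A caricature of the display function: it does not draw anything,
        # it just reports which formatting instruction applies to which text.
        def handle_starttag(self, tag, attrs):
            print("instruction:", tag, dict(attrs))
        def handle_data(self, data):
            if data.strip():
                print("text:", data.strip())

    TinyBrowser().feed(page)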

An important feature of web pages is that they carry hyperlinks. Such text (with embedded hyperlinks) is called Hyper Text, which is basically text within text. For example, the paragraphs above contain words like ‘HTML’, ‘World Wide Web’ and ‘browser’. Now if these words were hyperlinked and you wanted to know more about them, I would not need to give the information right here, but could provide a link to a separate document explaining each of these words. So, only if you wanted to know more about them would you go that deep.

In case you do want to know more about the Web and you click on it, then a new document that appears might explain what the Web is and how it was invented by Tim Berners-Lee, a particle physicist, when he was at CERN, the European Centre for Nuclear Research at Geneva. Now if you wanted to know more about Tim Berners-Lee or CERN then you could click on those words with your mouse and a small program would hyperlink the words to other documents containing details about Lee or CERN and so on.

Thus, starting with one page, you might ‘crawl’ to different documents in different servers over the Net depending on where the hyperlinks are pointing. This crawling and connectedness of documents through hyperlinks seems like a spider crawling over its web and there lies the origin of the term ‘World Wide Web.’

STORY WITHIN A STORY

For a literary person, the hyperlinked text looks similar to what writers call non-linear text. A linear text has a plot and a beginning, a middle and an end. It has a certain chronology and structure. But a nonlinear text need not have a beginning, middle and an end in the normal sense. It need not be chronological. It can have flashbacks and flash-forwards and so on.
If you were familiar with Indian epics then you would understand hyperlinked text right away. After all, Mahabharat,1 Ramayana,2 Kathasaritsagar,3 Panchatantra,4 Vikram and Betal’s5 stories have nonlinearities built into them. Every story has a sub-story. Sometimes there

_______________________________________________________________
1India’s greatest epic, based on ancient Sanskrit verse, of sibling rivalry.

2Ancient Indian epic—The Story of Rama.

3Ancient collection of stories.

4Anonymous collection of ancient animal fables in Sanskrit.

5A collection of 25 stories where a series of riddles are posed to king Vikram by Betal, a spirit.
_____________________________________

are storytellers as characters within stories, who then tell other stories, and so on. At times you can lose the thread because, unlike Hyper Text and hyperlinks—where the reader can exercise his choice to follow a hyperlink or not—the sub-stories in our epics drag you there anyway!

Earlier, you could get only text documents on the Net. With HTML pages, one could now get text with pictures or animations or even some music clips or video clips and so on. The documents on the Net became so much livelier, while the hyperlinks embedded within the page took you to different servers—host computers on the Internet acting as repositories of documents.

It is as if you open one book in a library and it offers you the chance to browse through the whole library of books, CDs and videos! By the way, the reference to the Web as a magical library is not fortuitous. The idea of a hyperlinked electronic library was essentially visualised in the 1940s by Vannevar Bush at MIT, who called it the Memex.

Incidentally, Tim Berners-Lee was actually trying to solve the problem of documentation and knowledge management in CERN. He was grappling with the problem of how to create a database of knowledge so that the experience of the past could be distilled in a complex organisation. It would also allow different groups in a large organisation to share their knowledge resources. That is why his proposal to his boss to create a hyperlinked web of knowledge within CERN, written in 1989-90, was called: ‘Information Management: A Proposal’. Luckily, his boss is supposed to have written two famous words, “Why not?” on his proposal. Lee saw that the concept could be generalised to the Internet. The Internet community quickly grasped it, and we saw the birth of the Internet as we know it today. A new era had begun.

Berners-Lee himself developed a program that looked like a word processor and displayed hyperlinks as underlined words. He called it a browser. The browser had two functions: a communication function, which used the Hyper Text Transfer Protocol (HTTP) to talk to servers, and a presentation function, which displayed the documents it fetched. As more and more servers capable of using HTTP were set up, the Web grew.
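To make the communication function concrete, here is a minimal sketch in Python (a modern stand-in, not Berners-Lee’s original code) of what a browser does before any presentation happens: open a connection to a server and speak HTTP to it. The host name is only a placeholder, and real browsers add redirects, caching and encryption on top.

import socket

# Fetch a page the way a browser's communication function does: connect to
# the server on port 80 and send a plain-text HTTP GET request.
def http_get(host, path="/"):
    with socket.create_connection((host, 80)) as sock:
        request = "GET {} HTTP/1.1\r\nHost: {}\r\nConnection: close\r\n\r\n".format(path, host)
        sock.sendall(request.encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# The reply carries headers followed by the HTML that the presentation
# function would render on screen.
print(http_get("example.com")[:300])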

Soon more browsers started appearing. The one written by a graduate student at the University of Illinois, Marc Andreessen, became very popular for its high quality and free availability. It was called Mosaic. Andreessen then left the university, teamed up with Jim Clark, the founder of Silicon Graphics, and floated a new company called Netscape Communications. Its Netscape Navigator created a storm, and when the company went public it set off the Internet mania in the stock market, attracting billions of dollars in valuation even though it was not making any profit!

Meanwhile, Tim Berners-Lee did not make a cent from his path-breaking work, since he refused to patent it. He continues to look at the development of the next generation of the Web as a non-profit service to society and heads the World Wide Web Consortium (W3C) at MIT, which has become the standards-setting body for the Web.

With the enormous increase in the number of servers connected to the Net carrying millions of documents, the need arose to search them efficiently. There were already programs to search databases and individual documents, but how do you search the whole Web? Programs were therefore written to collect a database of keywords found in Internet documents; these are known as search engines. They list the results of a search in the order of the frequency of occurrence of the keywords in different documents. Thus, if I am looking for a document by ‘Tim Berners-Lee’ on the Web, I type the words ‘Tim Berners-Lee’ into the search engine and ask it to search. Within a few seconds, I get a list of documents on the Web containing the words ‘Tim Berners-Lee’; they could have been written by or about Berners-Lee. HTML documents carry keywords describing the document, called meta-tags. Initially, it was enough to search within the meta-tags, but now powerful search engines have been devised which search the entire document. They make a list of words that are really important, based on the frequency of occurrence and the place where they appear: do they occur in the title of the document, in subheadings, or elsewhere? They then assign different levels of importance to these factors, a process called ‘weighting’. Based on the sum of weights, they rank different pages and then display the search results.
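The ‘weighting’ described above can be sketched in a few lines of Python. The documents, the query word and the weights (a word in the title counting five times as much as one in the body) are all invented here purely for illustration:

# Score documents by where and how often a query word occurs; the weights
# are made-up values, and real engines tune many more factors.
WEIGHTS = {"title": 5.0, "subheading": 2.0, "body": 1.0}

def score(document, word):
    word = word.lower()
    return sum(WEIGHTS[part] * text.lower().split().count(word)
               for part, text in document.items())

docs = [
    {"title": "Baking with yeast", "subheading": "Bread basics",
     "body": "Yeast makes bread rise."},
    {"title": "Garden soil", "subheading": "Compost",
     "body": "A passing mention of yeast."},
]

# Rank pages by their summed weights, highest first, as described above.
for doc in sorted(docs, key=lambda d: score(d, "yeast"), reverse=True):
    print(round(score(doc, "yeast"), 1), doc["title"])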

LIBRARIAN FOR THE INTERNET

Before the Web became the most visible part of the Internet, there were already search tools in place to help people find information on the Net. Programs with names like Gopher (sounds like ‘go for’) and Archie kept indexes of files stored on servers connected to the Internet and dramatically reduced the amount of time required to find programs and documents.

A Web search engine employs autonomous programs called spiders to build lists of words found on websites. When a spider is building its lists, the process is called Web crawling. In order to build and maintain a useful list of words, a search engine’s spiders have to look at a great many pages.

How does a spider crawl over the Web? The usual starting points are lists of heavily used servers and very popular pages. The spider will begin with a popular site, indexing the words on its pages and following every link found within the site. In this way, the spidering system quickly begins to travel, spreading out across the most widely used portions of the Web.
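A bare-bones version of that crawling loop can be written with Python’s standard library alone. The seed URL is a placeholder; a real spider would also respect robots.txt, space out its requests politely and, crucially, index the words on every page it visits:

from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

# Pull the href attribute out of every <a> tag on a page.
class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, limit=10):
    seen, queue = set(), deque([seed])
    while queue and len(seen) < limit:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            page = urlopen(url, timeout=5).read().decode("utf-8", errors="replace")
        except (OSError, ValueError):
            continue  # unreachable or malformed URL; move on
        parser = LinkParser()
        parser.feed(page)
        # A real engine would index the words on this page here.
        queue.extend(urljoin(url, link) for link in parser.links)
    return seen

print(crawl("http://example.com"))  # placeholder seed page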

In the late nineties, a new type of search engine called Google was launched by two graduate students at Stanford, Larry Page and Sergey Brin. It goes beyond keyword searches, looking at the ‘connectedness’ of documents, and it has become the most popular search engine of the moment.

Rajeev Motwani, a professor of computer science at Stanford, encouraged the two students by squirrelling away research funds for them to buy new servers for their project. He explains, “Let us say that you wanted information on ‘bread yeast’ and put those two words in Google. Then it not only sees which documents mention these words but also whether these documents are linked to other documents. An important page for ‘bread yeast’ must have all the other pages on the Web dealing in any way with ‘bread yeast’ linking to it. In our example, there may be a Bakers’ Association of America, which is hyperlinked by most documents containing ‘bread yeast’, implying that most people involved with ‘bread’ and ‘yeast’ think that the Bakers’ Association’s website is an important source of information. So Google will rate that website very high and put it on top of its list. Thus irrelevant documents which just mention ‘bread’ and ‘yeast’ will not be given any priority in the results.”
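Motwani’s bakery example can be turned into a toy computation. The following is a bare-bones, PageRank-style iteration over a made-up link graph; the page names are invented, and the damping factor of 0.85 is the value commonly quoted in descriptions of PageRank rather than anything from this article:

# Each page lists the pages it links to. Everyone links to the bakers'
# association, so 'connectedness' should push it to the top.
links = {
    "bakers-association": [],
    "home-baking-tips": ["bakers-association"],
    "yeast-biology": ["bakers-association"],
    "my-bread-blog": ["bakers-association", "home-baking-tips"],
}

def rank(links, iterations=20, d=0.85):
    pages = list(links)
    n = len(pages)
    score = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - d) / n for p in pages}
        for page, outgoing in links.items():
            targets = outgoing or pages  # a page with no links shares its score with all
            for target in targets:
                new[target] += d * score[page] / len(targets)
        score = new
    return sorted(score.items(), key=lambda item: -item[1])

print(rank(links))  # 'bakers-association' ends up with the highest score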

Motwani, who is a winner of the prestigious Gödel Prize for his contributions to computer science, is a technical advisor to Google and is watching its growth with enthusiasm. Google today boasts of being the fastest search engine and even lists the time it takes (usually a fraction of a second) to scan billions of documents and provide you with results. The strange name came about because ‘googol’ is a term for a very large number (10 to the power 100; the word was coined by the young nephew of an American mathematician and popularised in the 1940s), and the founders of Google, who wanted to express the power of their new search engine, misspelt it!

By the way, you might have noticed that the job of a search engine is nothing more than what a humble librarian does all the time, and does more intelligently! However, the automation in the software comes to our rescue in coping with the exponential rise in information.

NEW MEDIA

With the user-friendliness of the Internet taking a quantum leap with the appearance of the Web, commercial interests took notice of its potential. What had until then been a communication medium for engineers now appeared accessible to ordinary souls looking for information and communication. Providing those is what every publication does, and hence the Web came to be looked upon as a new publishing medium.

Now, if there is a new medium of information and communication and if millions of people are ‘reading’ it, then will advertising be far? News services and all kinds of publications started using the Web to disseminate news like a free wire service.

An ordinary person too could put up a personal web page containing his personal information or writings, at a modest cost or even for free, using hosting services. Thus, if Desktop Publishing created thousands of publications, the Web led to millions of publishers!

Corporations and organisations all over the world have adopted Web technology as a medium of communication. A private network based on Web technology is called an Intranet, as opposed to the public Internet. Thus, besides the local area network, a corporate CEO has a new way to communicate with his staff. Progressive corporations are also using the reverse flow of communication through their Intranets, from the bottom up, to break rigid bureaucracies and ‘proper channels’ and rejuvenate themselves.

The Web, however, was more powerful than all the old media, because it was interactive. The reader could not only read what was presented but also send requests for more information about the goods and services advertised.

Meanwhile, developments in software made it possible for users to fill in and submit forms on the Web. Web pages became dynamic: one could see ‘buttons’, and a click was even heard when you pressed one! Dynamic HTML and then Java enriched the content of Web pages. Java was a new language, developed by Sun Microsystems, that could work on any operating system. It was ideally suited for the Web, since no one knew the variety of hardware and operating systems inside the millions of computers connected to the Web. Interestingly, when I asked Raj Parekh, who worked as VP Engineering and CTO of Sun, how the name Java was picked, he said, “We chose Java because Java beans yield strong coffee, which is popular among engineers.” Today the steaming coffee cup, the symbol of Java, is well known to software engineers.

Developments in software also led to the encryption of information sent by the user to the web server. This opened up the possibility of actually transacting business on the Web; after all, any commercial transaction needs security.

Computers containing the personal financial information of users, like those of banks and credit card companies, could now be connected to the Web with appropriate security, such as passwords and encryption. At one stroke, a user’s request for information regarding a product could be turned into a ‘buy order’, with his bank or credit card company duly informed of it. Thus a new type of commerce based on the Web came into being, called e-commerce. The computers that interfaced the Web with the banks’ databases came to be known as payment gateways.

New enterprises that facilitated commerce on the Net were called dotcoms. Some of them actually sold goods and services on the Net, while others only supplied information. The information could also be a commodity to be bought or distributed freely. For example, if you wanted to know the details of patents filed in a particular area of technology then the person who had digitised the information, classified it properly and made it available on the web might charge you for providing that information, whereas a news provider may not charge any money from visitors to his website and might collect the same from advertisers instead.

COME, SET UP A SHOP WINDOW

There were companies like Amazon.com, which started selling books on the Internet. This form of commerce still needed warehouses, shipping and delivery of goods once they were ordered, but it saved the cost of the real estate involved in setting up a retail store. It also made the ‘store front’ available to anybody on the Net, no matter where he was sitting.

E-commerce soon spread to a whole lot of retail selling, be it airline or railway tickets, hotel rooms or even financial services. Thus, one could sit at home and book a ticket or a hotel across the world, or access one’s bank account. You could not only check your bank account but also pay your credit card, telephone, electricity and mobile bills on the Web, thereby minimising physical visits to various counters in diverse offices.

With the web bringing the buyer and seller together directly, many transactions that had to go through intermediaries could now be done directly. For example, one could auction anything on the web as long as both the parties trusted each other’s ability to deliver. If you could auction your old PC or buy a used car, then you could put up your shares in a company to auction as well.
But that is what stock exchanges do. So computerised stock exchanges, like Nasdaq in the US and NSE in India, which had already brought in electronic trading, could now be linked to the web. By opening your accounts with certain brokers, you could directly trade on the Net, without asking your broker to do it for you.

In fact, Web marketplaces called exchanges came into being to buy and sell goods and services for businesses and consumers. At one time, a lot of hype was created in the stock markets of the world that this form of trading would supersede all the old forms and that a ‘new economy’ had come into being. Clearly, it was an idea whose time had not yet come. The Internet infrastructure was weak. There was a proliferation of web storefronts with no warehouses, goods or systems of delivery in place. Consumers balked at this new form of catalogue marketing, and even businesses clearly showed preferences for trusted vendors. But the Web technologies have survived. Internet banking is growing, and so is bill payment. Corporations are linking their computer systems with those of their vendors and dealers in a Web-like network for collaboration and commerce. Services like booking railway or airline tickets or hotel rooms are being increasingly used. The Web has also made it possible for the owners of these services to auction a part of their capacity to optimise their occupancy rates.

Some entrepreneurs have gone ahead and created exchanges where one can look for a date as well!

Clearly, we are going to see more and more transactions shifting to the Internet, as governments, businesses and consumers all get caught up in this magical Web.

THEN THERE WAS HOTMAIL

It might sound like an Old Testament-style announcement, but that cannot be helped because the arrival and growth of email has changed communication forever.

In the early nineties, Internet email programs existed on a limited scale, and one had to pay for them. Members of academia and people working in large corporations had email, but those outside these circles could not adopt it unless they subscribed to commercial services like America Online. Two young engineers, Sabeer Bhatia and Jack Smith, thought there must be a way to provide a free email service to anybody who registered at their website. One could then access one’s mail by just visiting the website from anywhere. This idea of web-based mail, which was named Hotmail, immediately caught the fancy of millions of people, and Microsoft acquired the company. Free web-based email services have played a great role in popularising the new communication culture and, today, Hotmail is one of the largest brands on the Internet.

Soon, ‘portals’ like Yahoo offered web mail services. A portal is a giant aggregator of information, and a catalogue of documents catering to varied interests. Thus, a Web surfer could go to one major portal and get most of the information he wanted through the hyperlinks provided there. Soon, portals provided all kinds of directory services like telephone numbers. As portals tried to be one-stop shops in terms of information, more and more directory services were added. Infospace, a company founded by Naveen Jain in Seattle, pioneered providing such directory services to websites and mobile phones and overwhelmingly dominates that market in the US.

BLIND MEN AND THE ELEPHANT

The Web has become many things to many people.

For people like me looking for information, it has become a library. “How about taking this further and building giant public libraries on the Internet?” asks Raj Reddy. Reddy, a veteran computer scientist involved in many pioneering projects in Artificial Intelligence in the ’60s and ’70s, is now involved in a fantastic initiative to scan thousands of books and store them digitally on the Net. It is called the Million Book Digital Library Project at Carnegie Mellon University. Reddy has been meeting academics and government representatives in India and China to make this a reality. He is convinced that this labour-intensive job can best be done in India and China; moreover, the two countries will also benefit by digitising some of their own collections. A complex set of issues regarding intellectual property rights, such as copyright, is being sorted out, and Reddy is hopeful that pragmatic solutions will be found.
Public libraries themselves have existed for a long time and have shown that many people sharing a book or a journal is in the interest of society.

When universities across the world, and especially in India, are starved of funds for buying new books and journals and housing them in decent libraries, what would happen if such a Universal Digital Library were built and made universally available? That is what N. Balakrishnan, professor at the Indian Institute of Science, Bangalore, asked himself. He then went to work with a dedicated team to lead the work of the Million Book Digital Library in India. Already, several institutions are collaborating in the project, and over fifteen million pages of information have been digitised from books, journals and even palm leaf manuscripts.

If governments intervene appropriately and the publishing industry cooperates, then there is no doubt that a Universal Digital Library can do wonders to democratise access to knowledge.

WHAT NEXT?

While Rajeev Motwani and his former students dream of taking over the world with Google, Tim Berners-Lee is evangelising the next generation of the Web, called the Semantic Web. In an article in Scientific American, May 2001, Berners-Lee explained, “Most of the Web’s content today
is designed for humans to read, not for computer programs to manipulate meaningfully. Computers can adeptly parse Web pages for layout and routine processing—here a header, there a link to another page—but in general, they have no reliable way to process the semantics. The Semantic Web will bring structure to the meaningful content of Web pages, creating an environment where software agents roaming from page to page can readily carry out sophisticated tasks for users. It is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation. The initial steps in weaving the Semantic Web into the structure of the existing Web are already under way. In the near future, these developments will usher in significant new functionality as machines are enabled to process and ‘understand’ the data that they merely display at present.

“Information varies along many axes. One of these is the difference between information produced primarily for human consumption and that produced mainly for machines. At one end of the scale, we have everything from the five-second TV commercial to poetry. At the other end, we have databases, programs and sensor output. To date, the Web has developed most rapidly as a medium of documents for people rather than for data and information that can be processed automatically. The Semantic Web aims to make up for this. The Semantic Web will enable machines to comprehend semantic documents and data, not human speech and writings,” he adds.
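The phrase ‘well-defined meaning’ becomes concrete if you picture facts stored as subject-predicate-object triples, the data model behind the W3C’s Resource Description Framework (RDF). This Python sketch, with invented facts and a hypothetical match() helper, shows how a program could ‘understand’ such data rather than merely display it:

# Facts as (subject, predicate, object) triples, the shape RDF gives to data.
triples = [
    ("Tim Berners-Lee", "invented", "the World Wide Web"),
    ("Tim Berners-Lee", "worked at", "CERN"),
    ("the World Wide Web", "runs on", "the Internet"),
]

def match(triples, subject=None, predicate=None, obj=None):
    """Return every triple fitting the pattern; None acts as a wildcard."""
    return [t for t in triples
            if subject in (None, t[0])
            and predicate in (None, t[1])
            and obj in (None, t[2])]

# A software agent can now answer a structured question, not just match keywords:
print(match(triples, predicate="invented"))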

It is easy to be sceptical about a Semantic Web, as it smells of Artificial Intelligence, a project that proved too ambitious. Yet Berners-Lee is very hopeful of results flowing in slowly. He recognises that the Semantic Web will need a massive effort and is trying to win over more and more people to work on the challenge.

To sum up the attributes of the Net developed so far, we can say it has become a marketplace, a library and a communication medium.

A MANY-SPLENDOURED THING

The dual communication properties of the Web—inexpensive broadcasting and interactivity—will lead to new applications:

• It will help intellectual workers work from home and not necessarily from offices, thereby reducing the daily commute in large cities. This is being called telecommuting.

• It can act as a great medium to bring universities closer to the people and encourage distance learning. Today, a distance-learning experiment is in progress in the Adivasi villages of Jhabua district in Madhya Pradesh, using ISRO satellite infrastructure and a telephone link between students and the teacher for asking questions. Several such ‘tele-classrooms’ could come alive, for everyone, on their PC screens. Imagine students in remote corners of India being able to see streamed videos of the Feynman Lectures on Physics, or a demonstration of a rare and expensive experiment, or a surgical operation! With improved infrastructure, the ‘return path’ between students and teachers can be improved. Then a student need not go to an IIM or IIT to take a course; he could do it remotely and even send questions to the professor. It will increase the reach of higher education and training many-fold.

• It can act as a delivery medium for video on demand and music on demand, thus bringing in entertainment functions. Basavraj Pawate, a chip design expert at Texas Instruments who is now working on VOIP, believes that CD-quality sound can be delivered through the Net, once some problems of Internet infrastructure are solved.

• When video conferencing on the Net becomes affordable, all kinds of consultations with doctors, lawyers, accountants and so on can become possible at a distance. For example, today, a heart specialist in Bangalore is linked to villages in Orissa and Northeastern India. He is able to talk to the patients and local doctors, receive echocardiograms of patients and give his expert opinion. Similarly, the Online Telemedicine Research Institute of Ahmedabad could provide important support during Kumbh Mela† and in the aftermath of earthquakes like that in Bhuj.
However, all these experiments have been made possible using VSATs, thanks to ISRO offering the satellite channel for communication. As Internet infrastructure spreads far and wide, the same could be done between any doctor and any patient. Today, premier medical research institutions like the All India Institute of Medical Sciences, Delhi, the Postgraduate Institute of Medical Education and Research, Chandigarh, and the Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow, are being connected with a broadband network for collaborative medical research and consultation. If a broadband network comes into being in all the cities and semi-urban centres, to begin with, then the medical resources concentrated in the big cities of India can be made available to patients in semi-urban and even rural areas. Thus an expert’s time and knowledge can be optimally shared among many people.

• Indian farmers have demonstrated their hunger for knowledge and new agri-technology in the last thirty years. The Net can be used to distribute agronomical information, consultation regarding pests and plant diseases, meteorological information, access to land records and even the latest prices in the agricultural commodity markets. Though India has a large population involved in agriculture, the productivity of Indian agriculture is one of the lowest in the world. The Net can thus be used to increase agricultural productivity by enabling a massive expansion of the extension programme of agricultural universities and research institutions. Already, several experiments are going on in this direction. However, large-scale deployment of rural Internet kiosks, akin to the ubiquitous STD booths, awaits large-scale rural connectivity.

• It is understood that Voice over Internet, that is, packetised voice, might become an important form of normal telephony. International telephony between India and the US has come down drastically in cost due to VOIP. When this technology is applied within India, cheaper and more affordable telephony can be provided, which will work wonders for the Indian economy: a large number of the poor in cities, small towns and villages will be brought into the telecom net. While the incumbent state-owned telephone company, BSNL, might take some time to adopt new IP-based (Internet Protocol) technologies for voice, due to legacy issues in its network, the new private networks have an opportunity to build the most modern IP-based networks in the world.

• ‘Disintermediation’, which means removing the ‘brokers’ between two parties, is a major economic and social fallout of Internet technology. It also extends to the sphere of governance. Thus the Net can remove the layers of India’s infamous opaque and corrupt bureaucracy and bring governance closer to citizens. Forms can be filled, taxes can be paid, notifications can be broadcast, citizens’ views can be polled on controversial issues, the workings of different government committees can be reported and bills of government utilities can be recovered on the Net. Overall governance can become more citizen-friendly and transparent.

The scenario I am sketching is still futuristic even in the most advanced economies of the world. Further work is going on in three directions. One is in scaling up the present Internet backbone infrastructure to carry terabytes of data. The other is building IP-based routers to make it an efficient convergent carrier. And the third is to bridge the digital divide.

THE NEXT-GENERATION NET

“Bandwidth will play the same role in the new economy as oil played in the industrial economy till today,” says Desh Deshpande, chairman of Sycamore Networks, a leader in intelligent optical networking. He is part of the team that is working to create all-optical networks, optical switches and cross-connects, and ‘soft optics’, a combination of software and optical hardware that can provision gigabits per second of bandwidth to any customer in a very short period of time. “It used to take forever and lots of money to provision bandwidth for customers in the old telephone company infrastructure, but today the technology exists to do it in a day or two. We want to bring it down to a few minutes, so that we can have bandwidth on demand,” says Deshpande.

While several technologists like Desh Deshpande, Krish Bala, Rajeev Ramaswami and Kumar Sivarajan are working on improving data transport, Pradeep Sindhu is concerned with IP routers. Until the mid-nineties, very simple machines were being used as routers: they received packets, looked at them, sniffed them and sent them off to the next router, wasting some time in the process. They worked well at the enterprise level, but they could not handle the gigabits of traffic in the core of the network. Sindhu was surprised at their primitive nature and realised that computing had advanced enough by the mid-nineties to design faster and more efficient IP routers, and he built them for the core of the network. The company that Sindhu founded, Juniper Networks, has now come to be identified with high-end routers.

Sindhu has become an IP evangelist. “In 1996, when I asked myself how an exponential phenomenon like the Internet could be facilitated, I saw that the only protocol that could do it is IP. Since it is a connectionless protocol, it is reliable and easily scalable. The elements that were missing were IP routers. When I looked at the existing routers built by others, I was surprised at their primitive nature. That is when I realised that there was a great opportunity to build IP routers from the ground up using all the software and hardware techniques I had learnt at Xerox PARC (Palo Alto Research Center). I called Vinod Khosla, since I had done some work with Sun, and he had investments in networking. He gave me an hour. I spoke to him about the macro scene and told him that if we design routers from first principles, we could do fifty times better than what was available. He asked some questions and said he would think about it. He called back two weeks later and said let us do something together,” reveals Sindhu.

“When Pradeep came to me, he had no business experience. My view was: ‘I like the person and I like the way he thinks.’ I asked him to sit for three weeks next to somebody who was trying to build an Internet network and to understand what the problems were. He is such a good guy that he was able to learn quickly what the problems were. Helping a brilliant thinker like Pradeep and guiding him gives me great satisfaction. This is one guy who has really changed the Internet. The difference he has made is fabulous,” says Vinod Khosla.

Khosla was one of the founders of Sun Microsystems in 1982 and has become a passionate backer of ideas to build the next-generation Internet infrastructure. He works today as a partner at Kleiner Perkins Caufield & Byers, a highly respected venture capital firm in Silicon Valley, and has been named by several international business magazines as one of the top VCs in the world for picking and backing a large number of good ideas.

BRINGING THE BYTES HOME

The second direction in which furious work is going on is to actually bring the bytes home. What is happening today is that a wide and fast Internet super highway is being built, but people still have to reach it through slow and bumpy bullock cart roads. This is called the problem of the ‘edge’ of the Net or the problem of ‘last mile connectivity’.

While the ‘core’ of the network is being built with optical networking and fibre optics, several technologies are being tried out to reach homes and offices. One of them, laying fibre to the home itself, is expensive and can work for corporate offices in large cities. The second is to bring fibre as close to homes and offices as possible and then use multiple technologies to cover the remaining distance. This is called fibre to the kerb. The options then available for the last mile are:

• Using the existing copper wire cables of telephones and converting them to Digital Subscriber Lines (DSL). This utilises existing assets, but it works only over distances of about a kilometre, depending on the quality of the copper connection.

• Using the coaxial cable infrastructure of cable TV. This requires a sophisticated cable network, not the one our neighbourhood cablewallah (cable service provider) has strung up.

• Using fixed Wireless in Local Loop. This is different from the limited-mobility services being offered, which are actually fully mobile technologies whose range is limited due to regulatory issues. Such mobile technologies are still not able to deliver bandwidths that can be called broadband.

However, fixed wireless technologies exist that can deliver megabits per second of data. One of them is called Gigabit Wireless. According to Paul Raj of Stanford University, one of the pioneers of this technology, it can deliver several megabits per second of bandwidth using closely spaced multiple antennas and a technique he developed called space-time coding and modulation.

Another fixed wireless technology that is fast becoming popular as a way of building office networks without cables is wireless LAN. It is being tried out in neighbourhoods, airports, hotels, conference centres, exhibitions, etc., as a way of delivering fast Internet service of up to a few megabits per second, and at least a hundred kilobits per second. All one needs is a Wi-Fi card in one’s laptop or desktop computer to hook onto the Internet in these environments.
The road outside my apartment has been dug up six times and has been in a permanent state of disrepair for the last two years. I have tried explaining to my neighbours that this is all for a bright future of broadband connectivity. Initially, they thought I was talking about a new type of cable TV and paid some attention to what I was saying, but their patience is wearing thin as the roads continue to resemble a Martian or lunar landscape and there is no sign of any kind of bandwidth, broad or otherwise.

But being an incorrigible optimist, I am ready to wait for new telecom networks to roll out and old ones to modernise, so that we will see a sea change in telecom and Internet connectivity in India in a few years. In the process, I have learnt that if technology forecasting is hazardous then forecasting the completion of projects in India is even more so!

WHAT ABOUT VILLAGES?

Vinod Khosla, who does not mind espousing unpopular views if he is convinced they are right, says, “I suggest that we take our limited resources, and put them to the highest possible economic use. If you believe in the entrepreneurial model as I do, I believe that five per cent of the people empowered by the right tools can pull the remaining ninety-five per cent of the country along in a very radical way. The five per cent is not the richest or the most privileged or the people who can afford it the most; it is the people who can use it best. There are 500,000 villages in India. Trying to empower them with telecommunication is a bad idea. It’s uneconomic. What we are getting is very few resources in the rural areas despite years of trying and good intent. There are sprawling cities, climbing to ten or twenty million people. And the villages lack power, communications, infrastructure, education, and health care. Talking about rural telephony to a village of 100 families is not reasonable. If we drew 5,000 circles, each 40 km in radius on the map of India, then we could cover 100 villages in each circle or about 100,000 in all. I can see a few thousand people effectively using all these technologies”.

KNITTING THE VILLAGES INTO THE NET

However, Ashok Jhunjhunwala at IIT Madras disagrees. He believes that while it is a good idea to provide broadband connectivity to 5,000 towns, the surrounding villages can and should be provided with telephony and even intermediate-rate bandwidth. But is there enough money for it, and will there be an economic return? “Yes. Fixed wireless like the corDECT developed at IIT Madras can do the job inexpensively,” he asserts. “The point is to think out of the box and not blindly follow models developed elsewhere,” he says. “Information is power, and the Internet is the biggest and cheapest source of information today. Thus, providing connectivity to rural India is a matter of deep empowerment,” argues Jhunjhunwala.

What will be the cost of such a project? “Before we start discussing the costs, first let us agree on its necessity. Lack of access to the Internet is going to create strong divides within India. It is imperative that India acquire at least 200 million telephone and Internet connections at the earliest,” he adds.

He points out that, today, telephony costs about Rs 30,000 per line. In such a situation, for an economically viable service to be rolled out by any investor, the subscriber should be willing to pay about Rs 1,000 per month. Jhunjhunwala estimates that only 2 per cent of Indian households will then be able to afford it. If we can, however, bring down the cost to Rs 10,000 per line, then 50 per cent of Indian households, which is approximately 200 million lines, become economically viable. “Breaking the Rs 10,000-per-line barrier will lead to a disruptive technology in India”, says Jhunjhunwala.
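The arithmetic implicit in these figures amounts to a rule of thumb, inferred here from the numbers quoted rather than stated by Jhunjhunwala, that a viable monthly tariff is roughly one-thirtieth of the capital cost per line:

# The quoted figures imply roughly: viable monthly tariff = per-line cost / 30.
# The affordability percentages are the article's own estimates, not computed here.
def viable_monthly_tariff(cost_per_line_rs):
    return cost_per_line_rs / 30.0  # assumed rule of thumb

for cost in (30000, 10000):
    print("Rs %d per line -> about Rs %.0f per month" % (cost, viable_monthly_tariff(cost)))
# Rs 30,000 per line supports a tariff near Rs 1,000 a month (affordable to about
# 2 per cent of households); Rs 10,000 per line brings it near Rs 333, within reach of many more.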

LEARNING FROM THE CABLEWALLAH

Jhunjhunwala and his colleagues in the Tenet (Telecom and Networking) Group at IIT Madras are working towards this goal, but he feels that a much wider national effort is necessary to make it a reality. They have already developed a technology called corDECT, which costs about Rs 8,000-12,000 per line and, more importantly, seamlessly provides telephony and 30-70 kbps of Internet bandwidth. That is enough to start with for rural networks; more advanced technology and greater bandwidth can come later. “We have a lot to learn from the neighbourhood cablewallah. From zero in 1992, the number of cable TV connections today is believed to have grown to over fifty million. What has enabled this? The low cost of a cable TV connection and the falling real price of TV sets. As a result, cable TV has been made affordable to over sixty per cent of Indian households,” he says.

“The second reason for this rapid growth”, continues Jhunjhunwala, “is the nature of the organisation that delivers this service. Cable TV operators are small entrepreneurs. They put up a dish antenna and string cables on poles and trees to provide service in a radius of 1 km. The operator goes to each house to sell the service and collects the bill every month. He is available even on a Sunday evening to attend to customer complaints. This level of accountability has resulted in less-trained people providing better service, using a far more complex technology than that used by better-trained technicians handling relatively simple telephone wiring. However, what is even more important is that such a small-scale entrepreneur incurs labour cost several times lower than that in the organised sector. Such lower costs have been passed on to subscribers, making cable TV affordable”.

BUILDING BRIDGES WITH BANDWIDTH

Tenet has worked closely with some new companies started by alumni of IIT Madras, and the technology has been demonstrated in various parts of India and abroad. “It is possible to provide telephones as well as medium-rate Internet connections in all the villages of India in about two years’ time with modest investment. We have orders for over two million lines of corDECT base stations and exchanges and about one million lines of subscriber units. Several companies like BSNL, MTNL, Reliance, Tata Teleservices, HFCL Info (Punjab) and Shyam Telelink (Rajasthan) have placed these orders. I think there is a potential of up to fifty million lines in India during the next four years. As for rural connectivity, n-Logue, a company promoted by the Tenet Group of IIT Madras, is already deploying Internet kiosks in villages in fifteen districts. The cost of a kiosk is about Rs 50,000. They are partnering with local entrepreneurs, just as STD booths and cable TV did earlier, and are providing rural telephony by tying up with Tata Indicom in Maharashtra and Tata Teleservices in Tamil Nadu. Local entrepreneurs can come up, villages can get telephony and the basic service operators can fulfil their rural telephony quota. It is a win-win solution,” he says.

So there is reason for my optimism. More and more people in the government and the private sector see the change that communication and Internet infrastructure can bring to India, and the business opportunities available in it. In some places the highway may be wider and in others narrower, depending on economic viability or the present availability of capital, but one thing is sure: connectivity is coming to India in an unprecedented way. When this infrastructure is utilised innovatively, this nation of a billion people might see major changes in the quality of life by 2020, not just for the well-off but for hundreds of millions of others, if not the billion-plus.

Bandwidth might bridge the real divides in India.

FURTHER READING

1. Where Wizards Stay Up Late: The Origins of the Internet, Katie Hafner and Matthew Lyon, Touchstone, 1998.
2. The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal, M. Mitchell Waldrop, Viking, 2001.
3. ‘Information Management: A Proposal’, Tim Berners-Lee, CERN, March 1989/May 1990.
4. Paul Baran, interviewed by Judy O’Neill for the Oral History Archives, Charles Babbage Institute, Center for the History of Information Processing, University of Minnesota, Minneapolis.
5. ‘Making Telecom and Internet Work for Us’, Ashok Jhunjhunwala, Tenet Group, IIT Madras.
6. ‘The Semantic Web’, Tim Berners-Lee, James Hendler and Ora Lassila, Scientific American, May 2001.
7. ‘WiLL Is Not Enough’, Shivanand Kanavi, Business India, Sep 6-19, 1999.
8. ‘Log In for Network Nirvana’, Shivanand Kanavi, Business India, Feb 7-20, 2000.
9. ‘Pradeep Sindhu: A Spark from PARC’, Shivanand Kanavi, Business India, Jan 22-Feb 4, 2001 (http://reflections-shivanand.blogspot.com/2007/12/profile-pradeep-sindhu.html).
10. ‘India: Telemedicine’s Great New Frontier’, Guy Harris, IEEE Spectrum, April 2002.