For many people in the United Kingdom, the rise of low-cost web hosting has made starting a website far easier. As internet technology has advanced over the past decade, the supply of online hosting has increased exponentially, producing a large supply of low-cost web hosting in the United Kingdom. Going forward, supply is likely to keep growing faster than demand, making low-cost hosting even more plentiful and possibly even cheaper. For most webmasters in the United Kingdom, low-cost web hosting meets all of their needs: the size of most websites and their technical requirements are easily handled by budget hosting plans.


Despite the clear benefits of low-cost web hosting, it is not for everyone. Some people who simply run a small hobby site may choose a free provider instead. Conversely, large companies with highly sophisticated websites have needs that cannot be met by low-cost hosting and will have to spend more to get the services they require. Nonetheless, low-cost web hosting has found a distinct niche and will likely continue to be very successful. As prices fall further, its potential market will increase, and as more and more people decide they want their own website, demand will grow with it. These factors are sure to make low-cost web hosting an even bigger industry, one that affects millions of people in the United Kingdom.

While the United Kingdom has made great strides in keeping up with technological advances in the United States, one area where a gap remains is web hosting itself. Some UK hosting companies have been very successful on the domestic front and even host a few high-profile websites. However, the majority of companies that require sophisticated services still choose a provider in the US instead of a UK host. For consumers this is not a big problem, since buying hosting from an American provider is not a difficult task; for UK hosting companies, however, the lack of revenue from big clients makes it even harder to catch up with their American counterparts.

Ecommerce web hosting has become one of the most crucial aspects of today's economy. Almost every business now has at least an informational website, if not something more sophisticated that can support online sales, so the need for ecommerce hosting is apparent in almost every company in the world. To satisfy this rapid increase in demand, there has been a corresponding rapid increase in the number of ecommerce hosting providers, and ecommerce hosting has quickly become one of the largest segments of the UK web hosting industry.

The best way for a business in the new economy to get its website off on the right foot is to get in touch with a good web hosting service. These hosts provide a variety of website services, such as serving web content and managing online traffic. Since few businesses have these capabilities in-house, they have to find a good hosting company that can handle everything from e-commerce to bandwidth for a company of any size. Web hosting is not necessarily expensive to use. Most web hosts offer three main services:

1. Website support and development - the cost depends on how sophisticated your website will be.
2. Website hosting - basic service starts at around $10 per month; the fee goes up for additional capabilities.
3. Domain name registration - prices range from $10 to $30 annually to secure your domain name. Free domain names are available from certain web hosting companies.

Free web space is also available from some ISPs, although this space is rarely permanent and is likely to be small with limited capabilities. Still, it is quite useful for simple websites.
Reliability is an important issue when picking a web host, particularly if big business is involved. After all, big businesses don't want to go through lots of downtime or have a site that is always under construction; they prefer a hosting company that constantly monitors the site and uses the best resources to keep it up and running.

Web hosts offer websites in many sizes. The more space available for a website, the more web pages it can hold and the more information those pages can contain. Since not every business has employees who know how to program pages or create intricate designs with well-arranged graphics, many businesses look for a hosting company that offers easy-to-use templates with simple insertion points for quickly loading new information. Another resource many people want from a hosting company is the ability to track visitors' activities inside their site, so many hosting companies sell add-on web statistics packages. E-commerce features are also needed to ensure that visitors can communicate with the vendor, ask for help, place orders, or request information such as newsletters via email.


Many hard drives commonly used in laptop computers can withstand operational shock, so it is possible to go jogging while editing, or to shoot momentary video while on horseback or while riding a mountain bike down the center of a railway line, bumping over every tie, and capture the whole experience on a hard drive. It is possible to carry an enormous amount of hard drive space on your body. Prof Martin has 36 GB of hard drive space installed in what he wears; one of his waist-bag systems alone contains 2 GB of hard drive space and 512 MB of RAM.

WEARABLE COMPUTER: SMART CLOTHING

Abstract — Wearable computers are computers that are worn on the body. This type of wearable technology has been used in behavioral modeling, health monitoring systems, and information technologies and media development. Wearable computers are especially useful for applications that require computational support while the user's hands, voice, eyes, arms or attention are actively engaged with the physical environment. They also do not have the situational awareness that they should have: while they are not being explicitly used, they are unable to remain attentive to possible ways to help the user. Environmental technology in the form of ubiquitous computing, ubiquitous surveillance, and smart spaces, has attempted to bring multimedia computing seamlessly into our daily lives, promising a future world with cameras and microphones everywhere, connected to invisible computing, always attentive to our every movement or conversation. This raises some serious privacy issues. Even if we ignore these issues, there is still a problem of user-control, customization, and reliance on an infrastructure that will not become totally ubiquitous. In response to these problems, a personal, wearable, multimedia computer, with head-mounted camera(s)/display, sensors, etc. is proposed for use in day-to-day living within the surrounding social fabric of the individual. Examples of practical uses include: face identification, way-finding via sequences of freeze-frames, shared visual memory/environment maps, and other personal note-taking together with visual images. Anecdotal personal experiences are reported, and privacy issues are addressed, with a discussion of how personal `smart clothing' has counteracted or at least reached a healthy balance with environmental surveillance.

A Sophisticated Boot Sector Virus

With the basics of boot sectors behind us, let’s explore a
sophisticated boot sector virus that will overcome the rather glaring
limitations of the KILROY virus. Specifically, let’s look at a virus
which will carefully hide itself on both floppy disks and hard disks,
and will infect new disks very efficiently, rather than just at boot
time.
Such a virus will require more than one sector of code, so
we will be faced with hiding multiple sectors on disk and loading
them at boot time. To do this in such a way that no other data on a
disk is destroyed, while keeping those sectors of virus code well
hidden, will require some little known tricks. Additionally, if the
virus is to infect other disks after boot-up, it must leave at least a
portion of itself memory-resident. The mechanism for making the
virus memory resident cannot take advantage of the DOS Keep
function (Function 31H) like typical TSR programs. The virus must
go resident before DOS is even loaded, and it must fool DOS so
DOS doesn’t just write over the virus code when it does get loaded.

The Search and Copy Mechanism

Ok, let’s breathe some life into this boot sector. Doing that
is easy because the boot sector is such a simple animal. Since code
size is a primary concern, the search and copy routines are combined
in KILROY to save space.
First, the copy mechanism must determine where it came
from. The third to the last byte in the boot sector will be set up by
the virus with that information. If the boot sector came from drive
A, that byte will be zero; if it came from drive C, that byte will be
80H. It cannot come from any other drive since a PC boots only
from drive A or C.
Once KILROY knows where it is located, it can decide
where to look for other boot sectors to infect. Namely, if it is from
drive A, it can look for drive C (the hard disk) and infect it. If there
is no drive C, it can look for a second floppy drive, B:, to infect.
(There is never any point in trying to infect A. If the drive door on
A: were closed, so it could be infected, then the BIOS would have
loaded the boot sector from there instead of C:, so drive A would
already be infected.)
One complication in infecting a hard drive is that the virus
cannot tell where the DOS boot sector is located without loading
the partition boot sector (at Track 0, Head 0, Sector 1) and reading
the information in it. There is not room to do that in such a simple virus, so we just guess instead. We guess that the DOS boot sector
is located at Track 0, Head 1, Sector 1, which will normally be the
first sector in the first partition. We can check the last two bytes in
that sector to make sure they are 55H AAH. If they are, chances are
good that we have found the DOS boot sector. In the relatively rare
cases when those bytes belong to some other boot sector, for a
different operating system, tough luck. The virus will crash the disk.
If the ID bytes 55H AAH are not found in an infection attempt, the
virus will be polite and forget about trying to infect the hard drive.
It will go for the second floppy instead.

Infecting an EXE File

A virus that is going to infect an EXE file will have to
modify the EXE Header and the Relocation Pointer Table, as well
as adding its own code to the Load Module. This can be done in a
whole variety of ways, some of which require more work than
others. The INTRUDER virus will attach itself to the end of an EXE
program and gain control when the program first starts.
INTRUDER will have its very own code, data and stack
segments. A universal EXE virus cannot make any assumptions
about how those segments are set up by the host program. It would
crash as soon as it finds a program where those assumptions are
violated. For example, if one were to use whatever stack the host
program was initialized with, the stack could end up right in the
middle of the virus code with the right host. (That memory would
have been free space before the virus had infected the program.) As
soon as the virus started making calls or pushing data onto the stack,
it would corrupt its own code and self-destruct.
To set up segments for the virus, new initial segment values
for cs and ss must be placed in the EXE file header. Also, the old
initial segments must be stored somewhere in the virus, so it can
pass control back to the host program when it is finished executing.
We will have to put two pointers to these segment references in the
relocation pointer table, since they are relocatable references inside
the virus code segment.
Adding pointers to the relocation pointer table brings up
an important question. To add pointers to the relocation pointer
table, it may sometimes be necessary to expand that table’s size.
Since the EXE Header must be a multiple of 16 bytes in size,
relocation pointers are allocated in blocks of four four-byte pointers.
Thus, if we can keep the number of segment references down to
two, it will be necessary to expand the header only every other time.
On the other hand, the virus may choose not to infect the file, rather
than expanding the header. There are pros and cons for both
possibilities. On the one hand, a load module can be hundreds of
kilobytes long, and moving it is a time consuming chore that can
make it very obvious that something is going on that shouldn’t be.
On the other hand, if the virus chooses not to move the load module,
then roughly half of all EXE files will be naturally immune to
infection. The INTRUDER virus will take the quiet and cautious
approach that does not infect every EXE. You might want to try the
other approach as an exercise, and move the load module only when
necessary, and only for relatively small files (pick a maximum size).

An Outline for a Virus

In order for a virus to reside in a COM file, it must get
control passed to its code at some point during the execution of the
program. It is conceivable that a virus could examine a COM file
and determine how it might wrest control from the program at any
point during its execution. Such an analysis would be very difficult,
though, for the general case, and the resulting virus would be
anything but simple. By far the easiest point to take control is right
at the very beginning, when DOS jumps to the start of the program.

JAPANESE SIGNALING CONVENTIONS

Although the spoken Japanese and Chinese languages differ, they share a common written language. Written Japanese, which originated in the ninth century, was derived from Chinese and uses ideographs. The written language was simplified by introducing the kana phonetic system, containing 48 basic syllables. Of the two kana versions developed, hiragana and katakana, the latter was favored for telegraphic communications due to the ease of reproducing its kana symbols.
In order to write Japanese using the Roman alphabet A, B, ..., Z, each kana symbol is assigned a Roman-letter counterpart (Romaji). The Hepburn Romaji system used by Japan during World War II still remains in use today. The Hepburn frequencies {f(t)} of the letters A, B, ..., Z derived from a sample of Romanized Japanese are given in Table 7.1. The sample's index of coincidence $s_2 = \sum_{t=0}^{25} f^2(t) = 0.0819$ is much larger than the value $s_2 \approx 0.06875$ for English. The letters L, Q, and X do not occur in the Romanized Japanese text.
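As a quick illustration of the statistic above, here is a minimal Python sketch that computes the sum-of-squared-frequencies index of coincidence used in the passage; the sample string is a placeholder, and meaningful values require a reasonably long text.

```python
from collections import Counter

def index_of_coincidence(text: str) -> float:
    """Sum of squared relative letter frequencies: s2 = sum over t of f(t)^2."""
    letters = [c for c in text.upper() if "A" <= c <= "Z"]
    counts = Counter(letters)
    n = len(letters)
    return sum((count / n) ** 2 for count in counts.values())

# On long samples, Romanized Japanese scores near 0.08, English near 0.069,
# and uniformly random letters near 1/26 (about 0.038).
print(index_of_coincidence("KORE WA ROMAJI NO BUNSHO DESU"))
```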
A new cipher machine was introduced by the Japanese Foreign Office in 1930. Designated RED by the United States, Angooki Taipu A would soon be followed by other colors of the rainbow – PURPLE, CORAL, and JADE. The diagnosis and cryptanalysis of RED by the Army Signal Intelligence Service started in 1935 and was completed in one year.
RED was replaced in 1940 by Angooki Taipu B, designated PURPLE; its cryptanalysis was completed just before the bombing of Pearl Harbor. Intelligence gleaned from PURPLE traffic gave the United States a decisive edge in World War II.

CRYPTOGRAPHIC SYSTEMS

When a pair of users encipher the data they exchange over a network, the cryptographic transformation they use must be specific to the users. A cryptographic system is a family $\mathcal{T} = \{ T_k : k \in K \}$ of cryptographic transformations. A key $k$ is an identifier specifying a transformation $T_k$ in the family $\mathcal{T}$. The key space $K$ is the totality of all key values. In some way the sender and receiver agree on a particular $k$ and encipher their data with the enciphering transformation $T_k$.
Encipherment originally involved pen-and-pencil calculations. Mechanical devices
were introduced to speed up encipherment in the eighteenth century, and they in turn were replaced by electromechanical devices a century later. Encipherment today is often implemented in software; $T_k$ is an algorithm whose input consists of plaintext $x$ and key $k$ and whose output is ciphertext $y$.
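To make the notation concrete, here is a toy Python sketch of a family $\{T_k\}$ of keyed transformations. The repeating-XOR "cipher" is purely illustrative and not secure, and the key and message values are made up.

```python
def T(k: bytes, x: bytes) -> bytes:
    """Enciphering transformation T_k: XOR plaintext x with a repeating key stream."""
    return bytes(b ^ k[i % len(k)] for i, b in enumerate(x))

# XOR is its own inverse, so the same T_k deciphers the ciphertext.
key = b"agreed-key"                      # the k the sender and receiver agree on
ciphertext = T(key, b"attack at dawn")
assert T(key, ciphertext) == b"attack at dawn"
```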

Wi-Fi transitions computing

It is easy to underestimate the impact of wireless computing. It has become a common sight to see cafés full of people all connected to a server, yet not a cable in sight. An entire business has grown up around wireless computing - the Internet café - that lets anyone, for a small charge, piggyback on the establishment's Wi-Fi connection and surf to their heart's content. It is even possible to connect wirelessly using a mobile phone - a situation that would have been unthinkable 25 years ago, when you were lucky if you could connect to another phone number using the first mobile 'bricks', let alone the fledgling Internet.

It has released businesses from the constraints of an older way of working. But how does wireless computing actually work?

There are two types of what has become known as 'wireless' Internet: connection through a router (your standard Wi-Fi) or connection through the mobile phone network. Wireless routers are the most common form of land-based system; fundamentally, a router is a small connection box that allows a signal to be shared between several computers. Computers 'tap in' to the signal, which can be made even easier by adding a wireless interface card. These usually come as standard with most new laptops, but can be bought as a separate add-on. USB adapters and dongles also give anyone the ability to tap into a wireless signal, creating their own 'access points' through which the computer can send and receive network data.

Another key component of wireless computing is the actual hardware itself, namely the laptop. There is some contention as to who actually invented the concept of the laptop, but most cite Adam Osborne as the originator of the modern day laptop in 1981, although the 'clam-shell' design was attributed to William Moggridge and developed for GRiD Systems Corporation in 1979. It is difficult now to imagine life without laptops, yet it has only been a little over 30 years since their original conception. But probably the biggest influence on wireless computing was the development of WAP for mobile phones, allowing anyone to connect to the Internet using their mobile phone technology. Today, we feel short-changed if our mobile phone can't connect to the Internet on demand, 24/7.

How Is Wi-Fi Useful in the Home?

A whole range of utilities and gadgets are jumping on the Wi-Fi train. You can have a baby-monitor camera system that uses your home Wi-Fi network to transmit the images and audio from the camera to your computer or Pocket PC. Or what about the Wi-Fi rabbit? It is a toy rabbit that is permanently connected to the Internet. You can talk to it and give it commands; it can talk too, alerting you to incoming emails, SMS messages, phone calls, and so on. You can have it read out your email, the weather, stock market information and more.

One of the coolest applications of Wi-Fi is, IMHO, the AirPort Express access point from Apple. It is a wireless access point that effectively creates a wireless network. It has a USB port, so you can connect a printer to it and print wirelessly. But the absolute coolest thing about it is its mini-jack audio port, to which you can connect any speaker system or hi-fi equipment. So for €129 you can have a Wi-Fi network that prints without cables and plays your music wirelessly. And that is not all: with the latest version of iTunes you can now use multiple speakers, which means you can connect several AirPort Expresses together and have the music come out of all of them simultaneously. At home I now have music coming out of the living room, bedroom, office and kitchen all at once!

What Is IPTV (Internet Protocol Television) and How Does It Work?

How Do I Get This Technology?

To receive this technology you will need a special box, and you will also require a subscription with a provider. Subscriptions normally also include phone and internet service. As telephone wires are part of the broadcasting technology, you'll need to contact your phone service for details. While the market for this internet-based technology is presently controlled by telephone companies, other companies will most likely become involved as the market grows and the technology develops.

Worldwide Expansion

IPTV is bound to keep growing throughout the United States and the rest of the world. One advantage of this exciting entertainment technology is that it will allow you to watch more shows than are normally scheduled on your favourite television network. You'll be able to search around for other shows you might like while watching a currently airing program, using terms such as actors' names, directors' names, and program titles. Broadband-based streaming is also far steadier and clearer than typical internet streaming. The reception is better and there are far fewer annoying pauses.

Greater Options With IPTV

If you are the kind of viewer who likes to explore greater options when you watch television, then IPTV might be the right choice for you. This technology will allow you to discover more programmes on subjects you find interesting, and to explore the careers of your favourite actors. If you're a busy person with full-time work but still like to follow certain shows, internet-based television might also work for you: it will let you watch programs that have already aired, so you can keep up with your favourite sports team or prime-time storyline. Finally, broadcasting over broadband opens up many opportunities for interactive television. Looking to the future, you may be able to guess along with your favourite game shows; you won't just be a viewer watching from the sidelines at home anymore. You'll be a part of the show.

A Waterproof Shower TV?

Another great way of utilising this technology is in the installation of a waterproof shower and bathroom television. These waterproof televisions are IP enabled and completely safe. We'll see more and more of these luxury TVs appearing in regular homes as the price drops with time. What can possibly be better than watching the latest episode of 24 whilst relaxing in the bath?

How to Create a Virtual Office Setup

Some people do not even realize they need anti-virus programs to help protect their computers from infection. That is sadly the truth, but if you are a tech whiz you can make money from this rather lucrative field. Ever heard of virtual office solutions? The work can be full time or part time, depending on how committed you would like to be.

Your main role is to help troubleshoot problems faced by clients. Trust me, many of them are easy and can be solved within minutes or even seconds. You can work via online conferencing or by other means, such as a telephone call. When offering virtual workplace solutions, it is wise to join a company that provides virtual office services such as virtual office space; a good technical support team is essential to back up a company that specializes in these services.

So if you cannot find a job or are running low on cash, you can always turn to this line of work, provided you have sound knowledge of technical issues.

How to Write / Burn WMV Files to DVD

It must be noted that most DVD optical drives have write capability. There are many types of DVD media available, and some optical drives work with only a single type. For example, the recordable DVD formats available include DVD+R and DVD-R; there is also DVD-RAM. Most optical drives sold nowadays can work with any of the three types; however, some work with only one DVD media type.

Convert WMV to DVD via Burning

Here are usual steps to follow when burning WMV to DVD

1. Launch Software

First, launch the software and choose whether to burn data or create a copy of a DVD.

2. Load Your Source File

Your source file will be the WMV file. If the source file is already on a DVD or CD-ROM, insert that disc into the drive; if it is on your hard drive, you only need to select it.

3. Choose Destination and Format

In this case, the destination is the DVD drive. Some products can also convert video files to other formats; however, because we are keeping the same file type here, you do not need to change the format.

4. Choose Speed and Quality of File

Several optical drives can burn at up to 52X, which is quite fast. If you have the time, select a slower speed, such as 10X to 12X; this usually ensures a quality burn.

5. Burn the DVD

6. Confirm That the WMV Was Successfully Burned to DVD
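For readers who prefer the command line, here is a hedged alternative to the GUI steps above: a Python sketch that shells out to the ffmpeg tool (assumed to be installed) to convert a WMV into a DVD-compliant MPEG-2 file. The filenames are placeholders, and authoring and burning the disc still require a separate DVD tool.

```python
import subprocess

# "-target ntsc-dvd" (use "pal-dvd" in PAL regions) sets the resolution,
# bit rate, and audio format that DVD players expect.
subprocess.run(
    ["ffmpeg", "-i", "home_movie.wmv", "-target", "ntsc-dvd", "home_movie.mpg"],
    check=True,  # raise an error if the conversion fails
)
```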

Making HTTP Safe

People use web transactions for serious things. Without strong security, people wouldn't feel comfortable doing online shopping and banking. Without being able to restrict access, companies couldn't place important documents on web servers. The Web requires a secure form of HTTP.

The previous chapters talked about some lightweight ways of providing authentication (basic and digest authentication) and message integrity (digest qop="auth-int"). These schemes are good for many purposes, but they may not be strong enough for large purchases, bank transactions, or access to confidential data. For these more serious transactions, we combine HTTP with digital encryption technology.
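In practice, "HTTP plus digital encryption" usually means HTTPS, that is, HTTP carried over TLS. The minimal Python sketch below issues one HTTPS request with certificate verification enabled; the host name is a placeholder.

```python
import http.client
import ssl

# TLS supplies the server authentication, integrity, and encryption that
# plain HTTP lacks.
context = ssl.create_default_context()   # verifies the server's certificate
conn = http.client.HTTPSConnection("www.example.com", 443, context=context)
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)
conn.close()
```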

A secure version of HTTP needs to be efficient, portable, easy to administer, and adaptable to the changing world. It also has to meet societal and governmental requirements. We need a technology for HTTP security that provides:

· Server authentication (clients know they're talking to the real server, not a phony)

· Client authentication (servers know they're talking to the real user, not a phony)

· Integrity (clients and servers are safe from their data being changed)

· Encryption (clients and servers talk privately without fear of eavesdropping)

· Efficiency (an algorithm fast enough for inexpensive clients and servers to use)

· Ubiquity (protocols are supported by virtually all clients and servers)

· Administrative scalability (instant secure communication for anyone, anywhere)

· Adaptability (supports the best known security methods of the day)

· Social viability (meets the cultural and political needs of the society)

Trails of Breadcrumbs

Unfortunately, keeping track of where you've been isn't always so easy. At the time of this writing, there are billions of distinct web pages on the Internet, not counting content generated from dynamic gateways.

If you are going to crawl a big chunk of the world's web content, you need to be prepared to visit billions of URLs. Keeping track of which URLs have been visited can be quite challenging. Because of the huge number of URLs, you need to use sophisticated data structures to quickly determine which URLs you've visited. The data structures need to be efficient in speed and memory use.

Speed is important because hundreds of millions of URLs require fast search structures. Exhaustive searching of URL lists is out of the question. At the very least, a robot will need to use a search tree or hash table to be able to quickly determine whether a URL has been visited.

Hundreds of millions of URLs take up a lot of space, too. If the average URL is 40 characters long, and a web robot crawls 500 million URLs (just a small portion of the Web), a search data structure could require 20 GB or more of memory just to hold the URLs (40 bytes per URL X 500 million URLs = 20 GB)!
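One common way to shrink that footprint, sketched below in Python, is to store a fixed-size hash fingerprint of each URL rather than the URL string itself; the 8-byte truncation is an illustrative choice, not a requirement.

```python
import hashlib

class VisitedSet:
    """Track visited URLs by 8-byte fingerprints instead of full strings."""

    def __init__(self):
        self._seen = set()

    def _fingerprint(self, url: str) -> int:
        # 8 bytes of a SHA-1 digest instead of ~40 bytes of raw URL text.
        return int.from_bytes(hashlib.sha1(url.encode()).digest()[:8], "big")

    def add(self, url: str) -> bool:
        """Return True if the URL is new, False if it was already visited."""
        fp = self._fingerprint(url)
        if fp in self._seen:
            return False
        self._seen.add(fp)
        return True
```

At 8 bytes per fingerprint, 500 million URLs need roughly 4 GB of raw fingerprint data (plus set overhead), compared with the 20 GB of raw strings estimated above.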


Crawlers and Crawling

Web crawlers are robots that recursively traverse information webs, fetching first one web page, then all the web pages to which that page points, then all the web pages to which those pages point, and so on. When a robot recursively follows web links, it is called a crawler or a spider because it "crawls" along the web created by HTML hyperlinks.

Internet search engines use crawlers to wander about the Web and pull back all the documents they encounter. These documents are then processed to create a searchable database, allowing users to find documents that contain particular words. With billions of web pages out there to find and bring back, these search-engine spiders necessarily are some of the most sophisticated robots. Let's look in more detail at how crawlers work.
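The following Python sketch shows the basic recursive-traversal idea with a breadth-first queue and a visited set. It uses only the standard library, ignores robots.txt and politeness delays, and the seed URL would be supplied by the caller.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collect the href targets of <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed: str, max_pages: int = 100) -> set:
    """Breadth-first crawl: fetch a page, then queue the pages it points to."""
    visited, frontier = set(), deque([seed])
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            with urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", "replace")
        except (OSError, ValueError):
            continue                      # skip pages that cannot be fetched
        parser = LinkParser()
        parser.feed(html)
        frontier.extend(urljoin(url, link) for link in parser.links)
    return visited
```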

Web Intermediaries

Web proxy servers are middlemen that fulfill transactions on the client's behalf.Without a web proxy, HTTP clients talk directly to HTTP servers. With a web proxy, the client instead talks to the proxy, which itself communicates with the server on the client's behalf. The client still completes the transaction, but through the good services of the proxy server.

A proxy server can be dedicated to a single client or shared among many clients. Proxies dedicated to a single client are called private proxies. Proxies shared among numerous clients are called public proxies.

Public proxies

Most proxies are public, shared proxies. It's more cost effective and easier to administer a centralized proxy. And some proxy applications, such as caching proxy servers, become more useful as more users are funneled into the same proxy server, because they can take advantage of common requests between users.

Private proxies

Dedicated private proxies are not as common, but they do have a place, especially when run directly on the client computer. Some browser assistant products, as well as some ISP services, run small proxies directly on the user's PC in order to extend browser features, improve performance, or host advertising for free ISP services.

Web Server Implementations

Web servers implement HTTP and the related TCP connection handling. They also manage the resources served by the web server and provide administrative features to configure, control, and enhance the web server.

The web server logic implements the HTTP protocol, manages web resources, and provides web server administrative capabilities. The web server logic shares responsibilities for managing TCP connections with the operating system. The underlying operating system manages the hardware details of the underlying computer system and provides TCP/IP network support, filesystems to hold web resources, and process management to control current computing activities.
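As a minimal illustration of that division of labor, the Python sketch below uses the standard library's http.server module: the handler maps request paths onto files in the current directory, while the server class and the operating system take care of the TCP side. The port number is arbitrary.

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

# SimpleHTTPRequestHandler serves files from the current directory;
# HTTPServer manages the listening socket and per-connection handling.
server = HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler)
print("Serving HTTP on port 8080 ...")
server.serve_forever()
```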

Web servers are available in many forms:

· You can install and run general-purpose software web servers on standard computer systems.

· If you don't want the hassle of installing software, you can purchase a web server appliance, in which the software comes preinstalled and preconfigured on a computer, often in a snazzy-looking chassis.

· Given the miracles of microprocessors, some companies even offer embedded web servers implemented in a small number of computer chips, making them perfect administration consoles for consumer devices.

Servers and 100 Continue

If a server receives a request with the Expect header and 100-continue value, it should respond with either the 100 Continue response or an error code (see Table 3-9). Servers should never send a 100 Continue status code to clients that do not send the 100-continue expectation. However, as we noted above, some errant servers do this.

If for some reason the server receives some (or all) of the entity before it has had a chance to send a 100 Continue response, it does not need to send this status code, because the client already has decided to continue. When the server is done reading the request, however, it still needs to send a final status code for the request (it can just skip the 100 Continue status).
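Here is a server-side sketch of that rule, written against a raw socket in Python purely for illustration (real servers do this inside their HTTP parsing layer): send 100 Continue only when the client asked for it, and always send a final status afterward.

```python
import socket

def handle_connection(conn: socket.socket) -> None:
    """Toy handler: honor Expect: 100-continue, then send a final status."""
    data = b""
    while b"\r\n\r\n" not in data:              # read until end of headers
        chunk = conn.recv(4096)
        if not chunk:
            return
        data += chunk
    headers = data.split(b"\r\n\r\n", 1)[0].lower()
    if b"expect: 100-continue" in headers:
        # The client is waiting for permission to send the entity body.
        conn.sendall(b"HTTP/1.1 100 Continue\r\n\r\n")
    # ... read and process the body here ...
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")  # final status
```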


Safe Methods

HTTP defines a set of methods that are called safe methods. The GET and HEAD methods are said to be safe, meaning that no action should occur as a result of an HTTP request that uses either the GET or HEAD method.

By no action, we mean that nothing will happen on the server as a result of the HTTP request. For example, consider when you are shopping online at Joe's Hardware and you click on the "submit purchase" button. Clicking on the button submits a POST request (discussed later) with your credit card information, and an action is performed on the server on your behalf. In this case, the action is your credit card being charged for your purchase.

There is no guarantee that a safe method won't cause an action to be performed (in practice, that is up to the web developers). Safe methods are meant to allow HTTP application developers to let users know when an unsafe method that may cause some action to be performed is being used. In our Joe's Hardware example, your web browser may pop up a warning message letting you know that you are making a request with an unsafe method and that, as a result, something might happen on the server (e.g., your credit card being charged).
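A browser-style check like the one described above can be as small as the Python sketch below; the warning text and method set are illustrative only.

```python
SAFE_METHODS = {"GET", "HEAD"}   # requests that should cause no server-side action

def warn_if_unsafe(method: str) -> None:
    """Prompt the user before sending a request that may trigger an action."""
    if method.upper() not in SAFE_METHODS:
        print(f"Warning: {method} is not a safe method; "
              "the server may perform an action (e.g., charge your card).")
```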

Differential backup

Backs up the specified files if the files have changed since the last backup. This type doesn’t mark the files as having been backed up, however. (A differential backup is somewhat like a copy command. Because the file is not marked as having been backed up, a later differential or incremental backup will back up the file again.)
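A rough Python sketch of the idea, using file modification times as a stand-in for the archive marker: copy everything changed since the last full backup, and deliberately record nothing afterward, so the next differential picks up the same files again. Paths and the timestamp variable are placeholders.

```python
import os
import shutil

def differential_backup(source_dir: str, backup_dir: str, last_full_backup: float) -> None:
    """Copy files modified since the last FULL backup; do not mark them as backed up."""
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) > last_full_backup:
                dst = os.path.join(backup_dir, os.path.relpath(src, source_dir))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)
    # Note: last_full_backup is intentionally left unchanged, so a later
    # differential or incremental backup will back these files up again.
```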

Managing User Accounts and Groups Using Windows NT

As we saw earlier when looking at the Microsoft networking family, Windows NT is designed to provide far greater security than Windows 95. As such, Windows NT is the centerpiece of any Microsoft network where security is a major issue. An organization might not feel that its everyday documents require security, but most companies have payroll information or other data that they want to guard from access by unauthorized individuals. As you will soon see, Windows NT (both Workstation and Server) can maintain a user database that makes the NT products far more flexible in meeting security needs. Throughout your MCSE testing, keep in mind a few very basic conceptual frameworks. One of the most important is the interaction of user accounts, Global groups, and Local groups.

Passive Hubs

Passive hubs do not contain any electronic components and do not process the data signal in any way. The only purpose of a passive hub is to combine the signals from several network cable segments. All devices attached to a passive hub receive all the packets that pass through the hub. Because the hub doesn’t clean up or amplify the signals (in fact, the hub absorbs a small part of the signal), the distance between a computer and the hub can be no more than half the maximum permissible distance between two computers on the network. For example, if the network design limits the distance between two computers to 200 meters, the maximum distance between a computer and the hub is 100 meters. As you might guess, the limited functionality of passive hubs makes them inexpensive and easy to configure. That limited functionality, however, is also the biggest disadvantage of passive hubs.
ARCnet networks commonly use passive hubs. Token Ring networks also can use passive hubs, although the industry trend is to utilize active hubs to obtain the advantages

Hot Standby Router Protocol (HSRP)

Hot Standby Router Protocol (HSRP) was developed by Cisco Systems as a way of providing fault tolerance for routed internetworks. Normally, when a router goes down, routing protocols communicate this fact to all the routers on the network, which then reconfigure their routing tables to select alternate routes that avoid the downed router. The problem is that convergence, the updating of all routing tables on the network, is slow and can take some time to complete. HSRP was designed to work around this problem by allowing a standby router to take over from a router that has gone down and fill its role in a manner completely transparent to hosts on the network.

HSRP works by creating virtual groups that consist of two routers, an active router and a standby router. HSRP creates one virtual router in place of each pair of actual routers, and hosts need only to have their gateway address pointed to the virtual router. When a host forwards an Internet Protocol (IP) packet to its default gateway (the virtual router), it is actually handled by the active router. Should the active router go down, the standby router steps in and continues to forward packets it receives. To IP hosts on the network, the virtual router acts like a real router and everything works transparently whether the active or standby router actually does the forwarding.


Host Integration Server

Microsoft Host Integration Server 2000 is the successor to Microsoft's previous BackOffice gateway platform, SNA Server. Host Integration Server is part of the new .NET Enterprise Server platform and provides access to applications and data stores on IBM mainframe and AS/400 platforms. Host Integration Server supports network interoperability between Microsoft Windows 2000 client/server networks and Systems Network Architecture (SNA)-based mainframe computing platforms. Host Integration Server helps enterprises maximize their existing investments in legacy mainframe systems while leveraging the power of the Windows 2000 Server platform.

Host names are a friendlier way of identifying TCP/IP hosts than Internet Protocol (IP) addresses. Host names can be resolved into IP addresses by host name resolution using either a Domain Name System (DNS) server or the hosts file. Host names can include the characters a-z, A-Z, 0-9, period, and dash (-). To ensure full compatibility with DNS, do not use any other special characters in host names.
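A quick character-set check along those lines, as a Python sketch (it validates only the characters listed above, not label lengths or other DNS rules):

```python
import re

HOSTNAME_RE = re.compile(r"^[A-Za-z0-9.-]+$")   # letters, digits, period, dash

def is_valid_hostname(name: str) -> bool:
    """True if the name uses only DNS-compatible characters."""
    return bool(HOSTNAME_RE.match(name))

assert is_valid_hostname("server-01.example.com")
assert not is_valid_hostname("file_server")      # underscore is not allowed
```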



horizontal cabling

Horizontal cabling is usually installed in a star topology that connects each work area to the wiring closet, as shown in the illustration. Four-pair 100-ohm unshielded twisted-pair (UTP) cabling (Category 5 [Cat5] cabling or enhanced Category 5 [Cat5e] cabling) is usually recommended for new installations because it supports both voice and high-speed data transmission. To comply with Electronic Industries Association/Telecommunications Industries Association (EIA/TIA) wiring standards, individual cables should be limited to 295 feet (90 meters) in length between the wall plate in the work area and the patch panels in the wiring closet. Patch cords for connecting the patch panel to hubs and switches in the wiring closet should be no longer than 23 feet (7 meters) total, with a maximum of two patch cords per line, neither of which can exceed 19.7 feet (6 meters) in length. Cables connecting users' computers to wall plates should be limited to 9.8 feet (3 meters) in length.



High-Speed Token Ring (HSTR)

High-Speed Token Ring (HSTR) was a development project of the High-Speed Token Ring Alliance, which consisted of Token Ring hardware vendors IBM, Olicom, and Madge Networks. HSTR was initially intended as a 100 megabits per second (Mbps) version of 802.5 Token Ring networking architecture that would provide a logical upgrade for existing 4 and 16 Mbps Token Ring networks. Speeds up to 1 gigabit per second (Gbps) were envisioned as being possible down the line.


Unfortunately, customer interest in HSTR waned with the rapid development of Fast Ethernet and Gigabit Ethernet (GbE) technologies, and in 1998 IBM withdrew from the alliance and the effort collapsed. The effective result is that Ethernet has finally won the local area network (LAN) wars with Token Ring and Fiber Distributed Data Interface (FDDI) networking architectures, and these two architectures are now considered legacy technologies that have no real future.


High-Speed Circuit Switched Data (HSCSD)

High-Speed Circuit Switched Data (HSCSD) is the first upgrade available for GSM that boosts GSM's data-carrying capacity above its current maximum of 14.4 kilobits per second (Kbps). HSCSD is a pre-2.5G upgrade for GSM that has been deployed by a few carriers in lieu of General Packet Radio Service (GPRS), a 2.5G upgrade for GSM and other Time Division Multiple Access (TDMA) cellular systems that is just beginning to be widely deployed. Unlike GPRS, which requires that existing base station hardware be upgraded, HSCSD is a software-only upgrade and is easily implemented.

HSCSD works by aggregating together groups of GSM time slots. Each GSM frame consists of eight time slots, each of which can provide up to 14.4 Kbps throughput for data transmission. HSCSD uses one slot for upstream communications and aggregates four slots for downstream communications. HSCSD is thus an asymmetrical data transmission technology that supports speeds of 14.4 Kbps upstream and 57.6 Kbps downstream. However, the overhead required by slot aggregation usually results in downstream bandwidth of only 28.8 Kbps instead of the theoretical 57.6 Kbps.
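The slot arithmetic behind those figures, shown as a few lines of Python:

```python
SLOT_RATE_KBPS = 14.4                  # capacity of one GSM time slot

upstream = 1 * SLOT_RATE_KBPS          # one slot upstream -> 14.4 Kbps
downstream_max = 4 * SLOT_RATE_KBPS    # four slots downstream -> 57.6 Kbps theoretical
downstream_typical = 28.8              # commonly observed after aggregation overhead

print(upstream, downstream_max, downstream_typical)
```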


High-Performance Parallel Interface (HIPPI)

High-Performance Parallel Interface (HIPPI) is an American National Standards Institute (ANSI) standard for point-to-point networking at gigabit speeds. HIPPI was developed in the 1980s as a technology for interconnecting supercomputers, mainframes, and their storage devices. Fibre Channel is the closest networking technology to HIPPI in terms of its use and capabilities and is more widely used than HIPPI.


HIPPI operates at the physical and data-link layers to provide connection-oriented communications between two points. HIPPI can operate at either 800 megabits per second (Mbps) or 1.6 gigabits per second (Gbps), and it has a simple flow control command set that can establish or tear down a connection in under a microsecond using a HIPPI switch. There is also a new standard called HIPPI-6400 that supports speeds up to 6.4 Gbps for distances up to 164 feet (50 meters) over twisted pair copper and up to 0.62 miles (1 kilometer) over fiber-optic cabling.


High-level Data Link Control (HDLC)

High-level Data Link Control (HDLC) is a data-link layer protocol for synchronous communication over serial links. HDLC was developed in the 1970s by IBM as an offshoot of the Synchronous Data Link Control (SDLC) protocol for their Systems Network Architecture (SNA) mainframe computing environment. It was later standardized by the International Organization for Standardization (ISO) as a standard Open Systems Interconnection (OSI) Layer-2 protocol.

HDLC is called an encapsulation protocol because it encapsulates bit stream data into fixed-length frames for transmission over synchronous serial lines. HDLC is a bit-stream protocol (bit streams are streams of data not broken into individual characters) that uses a 32-bit checksum for error correction and supports full-duplex communication. HDLC frames consist of a special flag byte (01111110) followed by address and control information, data bits, and a cyclic redundancy check (CRC) byte. A control field at the start of a frame is used for establishing and terminating data link connections. As a data-link protocol, HDLC is responsible for ensuring that data is received intact, without errors, and in the proper sequence. It also supports flow-control mechanisms for managing synchronous data streams and can carry multiple protocols, but it does not include any mechanism for authentication (since authentication is not really needed in dedicated point-to-point communications).
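To make the frame layout concrete, here is a simplified Python sketch that assembles an HDLC-style frame (flag, address, control, data, frame check sequence, flag). It uses zlib's CRC-32 as a stand-in for the real FCS polynomial and omits bit stuffing, so it is illustrative rather than wire-compatible.

```python
import struct
import zlib

FLAG = 0x7E   # the HDLC flag byte, binary 01111110

def build_frame(address: int, control: int, payload: bytes) -> bytes:
    """Assemble flag | address | control | data | 32-bit FCS | flag."""
    body = struct.pack("BB", address, control) + payload
    fcs = struct.pack(">I", zlib.crc32(body))        # frame check sequence
    return bytes([FLAG]) + body + fcs + bytes([FLAG])

frame = build_frame(0xFF, 0x03, b"hello")            # example values only
```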

High-bit-rate Digital Subscriber Line (HDSL)

High-bit-rate Digital Subscriber Line (HDSL) can be used to transmit data over existing copper local loop connections at T1 or E1 speeds. It is used to transport data only (HDSL signals overlap the voice portion of the Plain Old Telephone Service [POTS] spectrum and therefore cannot carry voice) and is generally used in wide area networking (WAN) scenarios. The maximum distance for HDSL transmission is typically 15,000 feet (4500 meters) when running over unconditioned copper twisted-pair telephone wiring. Some providers, however, claim that their devices support twice this distance. This maximum distance from the telco's central office (CO) to an HDSL customer's premises is sometimes called the Carrier Service Area (CSA).

HDSL was the earliest symmetric version of DSL to be widely implemented and was designed in the early 1990s as an alternative to traditional T-1 services. T1 lines originally required intermediate repeaters to be installed every 6000 feet (1830 meters) between termination points in order to ensure the signal strength necessary to transport data at such high speeds. HDSL was developed by Bellcore as a repeaterless form of T1 that would save the cost of installing repeaters and speed deployment of T1 lines for customers.

Hierarchical Storage Management (HSM)

A Hierarchical Storage Management (HSM) system is a way of providing users with seemingly endless amounts of storage space. This is accomplished by moving much of the data from hard disks to an archival storage system, such as an optical drive or tape library. Pointers are created on the hard disks indicating where the archived data is located. Users who need access to data need only request it from the disk, and if the requested data has been archived, the pointers allow the data to be found and returned to the user. The whole process is transparent from the user's viewpoint: all the data appears to be stored on the hard disks.

heartbeat

An internal communications interface in Microsoft Cluster Server (MSCS) in Microsoft Windows NT Server, Enterprise Edition, and in the Cluster service in Windows 2000 Advanced Server and Windows .NET Enterprise Server, the heartbeat continuously provides interserver communication between cluster nodes in a cluster. One function of the heartbeat is to generate a message that the Cluster service running on one node regularly sends to the Cluster service on the other node to detect a failure within the cluster. Nodes in a cluster communicate status information with each other through the heartbeat using MSCS or the Cluster service. These messages appear as network traffic between the two nodes in the cluster on the dedicated network connection between the nodes, which is called the private network of the cluster. The primary heartbeat network interface is usually a crossover network cable directly attached between cluster nodes or in a private network. If the heartbeat is lost over the private connection, Cluster Server reverts to a public network or alternate connection for its heartbeat and other Cluster service traffic. You can configure the polling interval for the heartbeat using Cluster Administrator.

Hardware Compatibility List (HCL)

The Hardware Compatibility List (HCL) for a Microsoft Windows platform defines all supported hardware, including computer systems as well as individual hardware components such as video cards, motherboards, and sound cards. When in doubt, you should consult the HCL before installing an operating system on a nonstandard or customized machine or installing new hardware into a system with an existing operating system. Using components not included on the HCL can lead to installation failures or system instabilities. Microsoft has determined that drivers for hardware devices listed on the HCL are compatible with that version of Windows; Microsoft supports only these drivers.

If you use drivers for devices that are not on the HCL, you might not be entitled to Microsoft product support for your system configuration. If you must use non-HCL devices, contact the device's manufacturer to see whether a supported driver exists for the particular Windows operating system you are using. If you contact Microsoft Product Support Services (PSS) about a problem and the support engineer determines that a hardware device in your system is not on the HCL, you will likely incur a charge for the call even if the problem cannot be resolved.

hardware abstraction layer (HAL)

The hardware abstraction layer (HAL) hides hardware differences from the operating system so that uniform code can be used for all hardware. The HAL thus offers a uniform interface between the underlying hardware and the higher layers of the operating system. All underlying hardware looks the same to the Windows 2000, Windows XP, and Windows .NET Server operating systems because they "see" the hardware through the filtered glasses of the HAL.

The HAL is located at the base of the Executive Services, and it encapsulates most hardware-specific functions that are performed by the operating system. If another portion of the operating system wants to access a hardware device, it must refer its request to the HAL. The HAL handles all communication between the kernel of the operating system and the hardware, particularly those regarding processor commands, system interrupts, and input/output (I/O) interfaces.


Handheld Device Markup Language (HDML)

Handheld Device Markup Language (HDML) was developed by Unwired Planet (now Phone.com) to enable mobile communications devices like cell phones and Personal Digital Assistants (PDAs) to access content on the Internet. Such devices usually have limited size displays (typically four lines by 20 characters or smaller) and limited processing power that make them require that information they download from the Internet be specially formatted to meet these requirements.

HDML is not intended as a means of delivering standard HTML-formatted Web content to such devices; most standard Web pages simply cannot be reformatted to fit on devices with such small display areas. HDML is not a subset or scaled-down version of HTML but an entirely new markup language specifically designed from the ground up for these devices.

HDML can be used to deliver a broad range of time-sensitive information to handheld wireless devices, including appointments, weather information, stock quotes, telephone directory white pages, inventory, catalog pricing, and similar business and commercial information. Scripts can be developed to extract this kind of information from the databases in which it resides and format it into HDML cards in much the same way that Perl can be used to write scripts to access database information stored on UNIX servers or that Active Server Pages (ASP) can be used to build Web-based applications that connect to Structured Query Language (SQL) databases.

HDML is an open standard guided and licensed by Phone.com. The current version of this technology is HDML 2.

The 1990s: An Explosive Decade for Networking

The 1990s were an explosive decade in every aspect of networking, and we can only touch on a few highlights here. Ethernet continued to evolve as a LAN technology and began to eclipse competing technologies such as Token Ring and FDDI. In 1991, Kalpana Corporation began marketing a new form of bridge called a LAN switch, which dedicated the entire bandwidth of a LAN to a single port instead of sharing it among several ports. Later known as Ethernet switches or Layer 2 switches, these devices quickly found a niche in providing dedicated high-throughput links for connecting servers to network backbones. Layer 3 switches soon followed, eventually displacing traditional routers in most areas of enterprise networking except for WAN access. Layer 4 and higher switches are now popular
in server farms for load balancing and fault tolerance purposes.

The rapid evolution of the PC computing platform and the rise of bandwidth-hungry applications created a need for something faster than 10-Mbps Ethernet, especially on network backbones. The first full-duplex Ethernet products, offering speeds of 20 Mbps, became available in 1992. In 1995 work began on a standard for full-duplex Ethernet; it was finalized in 1997. A more important development was Grand Junction Networks' commercial Ethernet bus, introduced in 1992, which functioned at 100 Mbps. Spurred by this advance, the 802.3 group produced the 802.3u 100BaseT Fast Ethernet standard for transmission of data at 100 Mbps over both twisted-pair copper wiring and fiber-optic cabling.

Although the jump from 10-Mbps to 100-Mbps Ethernet took almost 15 years, a year after the 100BaseT Fast Ethernet standard was released, work began on a 1000-
Mbps version of Ethernet popularly known as Gigabit Ethernet (GbE). Fast Ethernet was beginning to be deployed at the desktop, and this was putting enormous strain on the FDDI backbones that were deployed on many commercial and university campuses. FDDI also operated at 100 Mbps (or 200 Mbps if fault tolerance was discarded in favor of carrying traffic on the redundant ring), so a single Fast Ethernet desktop connection could theoretically saturate the capacity of the entire network backbone. Asynchronous Transfer Mode (ATM), a broadband cell-switching technology used primarily in telecommunication/WAN environments, was briefly considered as a possible successor to FDDI for backboning Ethernet networks together, and LAN emulation (LANE) was developed to carry LAN traffic such as Ethernet over ATM. However, ATM is much more complex than Ethernet, and a number of companies saw extending Ethernet speeds to 1000 Mbps as a way to provide network backbones with much greater capacity using technology that most network administrators were already familiar with. As a result, the 802 group called 802.3z developed a GbE standard called 1000BaseX, which it released in 1998. Today GbE is the norm for LAN backbones, and Fast Ethernet is becoming ubiquitous at the desktop level. Work is even underway on extending Ethernet technologies to 10 gigabits per second (Gbps). A competitor of GbE for high-speed collapsed backbone interconnects, called Fibre Channel, was conceived by an ANSI committee in 1988 but is used mainly for storage area networks (SANs).

The 1990s saw huge changes in the landscape of telecommunications providers and their services. "Convergence" became a major buzzword, signifying the combining of voice, data, and broadcast information into a single medium for delivery to businesses and consumers through broadband technologies such as metropolitan Ethernet, Digital Subscriber Line (DSL), and cable modem systems. The cable modem was introduced in 1996, and by the end of the decade broadband residential Internet access through cable television systems had become a strong competitor with telephone-based systems such as Asymmetric Digital Subscriber Line (ADSL) and G.Lite, another variant of DSL.

Also in the 1990s, Voice over IP (VoIP) emerged as the latest "Holy Grail" of networking and communications and promised businesses huge savings by routing voice telephone traffic over existing IP networks. VoIP technology works, but the bugs are still being ironed out and deployments remain slow. Recent developments in VoIP standards, however, may help propel deployment of this technology in coming years.

The first public frame relay packet-switching services were offered in North America in 1992. Companies such as AT&T and Sprint installed a network of frame relay nodes across the United States in major cities, where corporate networks could connect to the service through their local telco. Frame relay began to eat significantly into the deployed base of more expensive dedicated leased lines such as the T1 or E1 lines that businesses used for their WAN solutions, resulting in lower prices for these leased lines and greater flexibility of services.
In Europe frame relay has been deployed much more slowly, primarily because of the widespread deployment of packet-switching networks such as X.25.

The Telecommunications Act of 1996 was designed to spur competition in all aspects of the U.S. telecommunications market by allowing the RBOCs access to long-distance services and IXCs access to the local loop. The result has been an explosion in technologies and services offered by new companies called competitive local exchange carriers (CLECs), with mergers and acquisitions changing the nature of the service provider landscape almost daily.

The 1990s saw a veritable explosion in the growth of the Internet and the development of Internet technologies. As mentioned earlier, ARPANET was replaced in 1990 by NSFNET, which by then was commonly called the Internet. At the beginning of the 1990s, the Internet's backbone consisted of 1.544-Mbps T1 lines connecting various institutions, but in 1991 the process
of upgrading these lines to 44.736-Mbps T3 circuits began. By the time the Internet Society (ISOC) was chartered in 1992, the Internet had grown to an amazing 1 million hosts on almost 10,000 connected networks. In 1993 the NSF created the Internet Network Information Center (InterNIC) as a governing body for DNS. In 1995 the NSF stopped sponsoring the Internet backbone and NSFNET went back to being a research and educational network. Internet traffic in the United States was routed through a series of interconnected commercial network providers.

The first commercial Internet service providers (ISPs) emerged in the early 1990s when the NSF removed its restrictions against commercial traffic on the NSFNET. Among these early ISPs were Performance Systems International (PSI), UUNET, MCI, and SprintLink. (The first public dial-up ISP was actually The World, with the URL www.world.std.com.) In the mid-1990s, commercial online networks such as AOL, CompuServe, and Prodigy provided their subscribers with gateways to the Internet. Later in the decade, Internet deployment grew exponentially, with personal Internet accounts proliferating by the tens of millions around the world, new technologies and services developing, and new paradigms evolving for the economy and business. It would take a whole book to talk about all the ways the Internet has changed our lives.

Many Internet technologies and protocols have come and gone quickly. Archie, an FTP search engine developed in 1990, is hardly used today. The WAIS protocol for indexing, storing, and retrieving full-text documents, which was developed in 1991, has been eclipsed by Web search technologies. Gopher, which was created in 1991, grew to a worldwide collection of interconnected file systems, but most Gopher servers have now been turned off. Veronica, the Gopher search tool developed in 1992, is obviously obsolete as well. Jughead later supplemented Veronica but has also become obsolete. (There never was a Betty.)

The most obvious success story among Internet protocols has been HTTP, which, with HTML and the system of URLs for addressing, has formed the basis of the Web. Tim Berners-Lee and his colleagues created the first Web server (whose fully qualified DNS name was info.cern.ch) and Web browser software using the NeXT computing platform, developed by the company that Apple cofounder Steve Jobs started after leaving Apple. This software was ported to other platforms, and by the end of the decade more than 6 million registered Web servers were running, with the numbers growing rapidly.

Lynx, a text-based Web browser, was developed in 1992. Mosaic, the first widely used graphical Web browser, was developed in 1993 for the UNIX X Window System by Marc Andreessen while he was a student at the National Center for Supercomputing Applications (NCSA). At that time, there were only about 50 known Web servers, and HTTP traffic amounted to only about 0.1 percent of the Internet's traffic. Andreessen went on to cofound Netscape Communications, which released its first version of Netscape Navigator in 1994. Microsoft Internet Explorer 2 for Windows 95 was released in 1995 and rapidly became Netscape Navigator's main competition. In 1995, Bill Gates announced Microsoft's wide-ranging commitment to support and enhance all aspects of Internet technologies through innovations in the Windows platform, including the popular Internet Explorer Web browser and the Internet Information Server (IIS) Web server platform for Windows NT. Another initiative in this direction was Microsoft's announcement in 1996 of its ActiveX technologies, a set of tools for active content such as animation and multimedia for the Internet and the PC.

In cellular communications technologies, the 1990s were clearly the "digital decade." In 1991 the work of the TIA resulted in the first U.S. standard for digital cellular communication, the Time Division Multiple Access (TDMA) Interim Standard 54 (IS-54). Digital cellular was badly needed because the analog cellular subscriber market in the United States had grown to 10 million subscribers in 1992 and 25 million subscribers in 1995. The first tests of this TDMA-based technology took place in Dallas, Texas, and in Sweden, and were a success. The standard was revised in 1994 as TDMA IS-136, which is commonly referred to as Digital Advanced Mobile Phone Service (D-AMPS).

Meanwhile, two competing digital cellular standards also appeared. The first was IS-95, a standard for Code Division Multiple Access (CDMA) cellular systems based on spread spectrum technologies; the technology was first proposed by QUALCOMM in the late 1980s and was standardized by the TIA as IS-95 in 1993. Standards preceded implementation, however; it was not until 1996 that the first commercial CDMA cellular systems were rolled out.

The second system was the Global System for Mobile Communication (GSM) standard developed in Europe. (GSM originally stood for Groupe Spécial Mobile.) GSM was first envisioned in the 1980s as part of the movement to unify the European economy, and the European Telecommunications Standards Institute (ETSI) determined the final air interface in 1987. Phase 1 of GSM deployment began in Europe in 1991. Since then, GSM has become the predominant system for cellular communication in over 60 countries in Europe, Asia, Australia, Africa, and South America, with over 135 mobile networks implemented. However, GSM implementation in the United States did not begin until 1995.

In the United States, the FCC began auctioning off portions of the 1900-MHz frequency band in 1994. Thus began the development of the higher-frequency Personal Communications System (PCS) cellular phone technologies, which were first commercially deployed in the United States in 1996.

Establishment of worldwide networking and communication standards continued apace in the 1990s. For example, in 1996 the Unicode standard, a character set that can represent any language of the world using 16-bit characters, was released, and it has since been adopted by all major operating system vendors.

In client/server networking, Novell in 1994 introduced Novell NetWare 4, which included the new Novell Directory Services (NDS), then called NetWare Directory Services. NDS offered a powerful tool for managing hierarchically organized systems of network file and print resources and for managing security elements such as users and groups. NetWare is now in version 6 and NDS is now called Novell eDirectory.

In other developments, the U.S. Air Force launched the twenty-fourth satellite of the Global Positioning System (GPS) constellation in 1994, making precise terrestrial positioning possible using handheld GPS receivers. RealNetworks released its first software in 1995, the same year that Sun Microsystems announced the Java programming language, which has grown in a few short years to rival C/C++ in popularity for developing distributed applications. Amazon.com was launched in 1995 and has since become a colossus of cyberspace retailing. Microsoft WebTV, introduced in 1997, is beginning to make inroads into the residential Internet market.

Finally, the 1990s were, in a very real sense, the decade of Windows. No other technology has had as vast an impact on ordinary computer users as Windows, which brought to homes and workplaces the power of PC computing and the opportunity for client/server computer networking. Version 3 of Windows, which was released in 1990, brought dramatic increases in performance and ease of use over earlier versions, and Windows 3.1, released in 1992, quickly became the standard desktop operating system for both corporate and home users. Windows for Workgroups 3.1 quickly followed that same year. It integrated networking and workgroup functionality directly into the Windows operating system, allowing Windows users to use the corporate computer network for sending e-mail, scheduling meetings, sharing files and printers, and performing other collaborative tasks. In fact, it was Windows for Workgroups that brought the power of computer networks from the back room to users' desktops, allowing them to perform tasks previously possible only for network administrators.

In 1992, Microsoft released the first beta version of its new 32-bit network operating system, Windows NT. In 1993 came MS-DOS 6, as Microsoft continued to support users of text-based computing environments. That was also the year that Windows NT and Windows for Workgroups 3.11 (the final version of 16-bit Windows) were released. In 1995 came the long-awaited release of Windows 95, a fully integrated 32-bit desktop operating system designed to replace MS-DOS, Windows 3.1, and Windows for Workgroups 3.11 as the mainstream desktop operating system for personal computing. Following in 1996 was Windows NT 4, which included enhanced networking services and a new Windows 95-style user interface. Windows 95 was superseded by Windows 98 and later by Windows Millennium Edition (Me).

At the turn of the millennium came the long-anticipated successor to Windows NT, the Windows 2000 family of operating systems, which includes Windows 2000 Professional, Windows 2000 Server, Windows 2000 Advanced Server, and Windows 2000 Datacenter Server. The Windows family has now grown to encompass the full range of networking technologies, from embedded devices and Personal Digital Assistants (PDAs) to desktop and laptop computers to heavy-duty servers running the most advanced, powerful, scalable, business-critical, enterprise-class applications.

The 1980s: Client/Server LAN Architectures

In the 1980s the growth of client/server LAN architectures continued while that of mainframe computing environments declined. The advent of the IBM PC in 1981 and the standardization and cloning of this architecture led to an explosion of PC-based LANs in businesses and corporations around the world, particularly with the release of the IBM PC AT hardware platform in 1984. The number of PCs in use grew from 2 million in 1981 to 65 million in 1991. Novell, which appeared on the scene in 1983, soon became a major player in file and print servers for LANs with its Novell NetWare platform.

However, the biggest development in the area of LAN networking in the 1980s was the continued evolution and standardization of Ethernet. While the DIX consortium worked on Ethernet standards in the late 1970s, the IEEE, through its Project 802 initiative, initially tried to develop a single unified LAN standard. When it became clear that this goal was impossible, Project 802 was divided into a number of separate working groups, with 802.3 focusing on Ethernet, 802.4 on Token Bus, and 802.5 on Token Ring technologies and standards. The work of the 802.3 group resulted in the first Ethernet standard, called 10Base5 or thicknet, which was almost identical to the version developed by DIX. 10Base5 was called thicknet because it used thick coaxial cable, and in 1985 the 802.3 standard was extended to include 10Base2 using thin coaxial cable, commonly called thinnet.

Through most of the 1980s, coaxial cable was the main form of cabling used for implementing Ethernet. A company called SynOptics Communications, however, developed a product called LattisNet that was designed for transmitting 10-Mbps Ethernet over twisted-pair wiring using a star-wired topology that was connected to a central hub or repeater. This wiring was cheaper than coaxial cable and was similar to existing residential and business telephone wiring. LattisNet was such a commercial success that in 1990 the 802.3 committee approved a new standard called 10BaseT for Ethernet that ran over twisted-pair wiring. 10BaseT soon superseded the coaxial forms of Ethernet because of its ease of installation and because its hierarchical star-wired topology was a good match for the architectural topology of multistory buildings.

In other Ethernet developments, fiber-optic cabling, first developed in the early 1970s by Corning, found its first commercial networking application in Ethernet networking in 1984. (The technology itself was standardized as 10BaseFL in the early 1990s.) In 1988 the first fiber-optic transatlantic undersea cable was laid and greatly increased the capacity of transatlantic communication systems.

Ethernet bridges became available in 1984 from DEC and were used both to connect separate Ethernet LANs to make large networks and to reduce traffic bottlenecks on overloaded networks by splitting them into separate segments. Routers could be used for similar purposes, but bridges generally offered better price and performance, as well as less complexity, during the 1980s. Again, market developments preceded standards, as the IEEE 802.1D Bridge Standard, which was initiated in 1987, was not standardized until 1990.

In the UNIX arena, the development of the Network File System (NFS) by Sun Microsystems in 1985 resulted in
a proliferation of diskless UNIX workstations with built-in Ethernet interfaces. This development helped drive the demand for Ethernet and accelerated the evolution of Ethernet bridging technologies into today's switched networks. By 1985 the rapidly increasing numbers of UNIX hosts and LANs connected to the ARPANET began to transform it from what had been mainly a network of mainframe and minicomputer systems into something like what it is today. The first UNIX implementation of TCP/IP came in version 4.2 of Berkeley's BSD UNIX (4.2BSD), from which vendors such as Sun Microsystems quickly derived their own TCP/IP implementations. Although PC-based LANs rapidly grew in popularity in business and corporate settings during the 1980s, UNIX continued to dominate in academic and professional high-end computing environments as the mainframe environment declined.

IBM introduced its Token Ring networking technology in 1985 as an alternative LAN technology to Ethernet. IBM had submitted its technology to the IEEE in 1982 and the 802.5 committee standardized it in 1984. IBM soon supported the integration of Token Ring with its existing SNA networking services and protocols for IBM mainframe computing environments. The initial Token Ring specifications delivered data at 1 Mbps and 4 Mbps, but IBM dropped the 1-Mbps version in 1989 when it introduced a newer 16-Mbps version. Interestingly, no formal IEEE specification exists for 16-Mbps Token Ring--vendors simply adopted IBM's technology for the product. Efforts were made to develop high-speed Token Ring, but these efforts were eventually abandoned, and today Ethernet reigns supreme.

Also in the field of local area networking, in 1982 the American National Standards Institute (ANSI) began standardizing the specifications for Fiber Distributed Data Interface (FDDI). FDDI was designed to be a high-speed (100 Mbps) fiber-optic networking technology for LAN backbones on campuses and industrial parks. The final FDDI specification was completed in 1988, and deployment in campus LAN backbones grew during the late 1980s and the early 1990s. But today FDDI is considered legacy technology and has been superseded in most places by Fast Ethernet and Gigabit Ethernet (GbE).

In 1983 the ISO developed an abstract seven-layer model for networking called the Open Systems Interconnection (OSI) reference model. Although some commercial networking products were developed based on OSI protocols, the standard never really took off, primarily because of the predominance of TCP/IP. Other standards from the ISO and ITU that emerged in the 1980s included the X.400 electronic messaging standards and the X.500 directory recommendations, both of which held sway for a while but have now largely been superseded: X.400 by the Internet's Simple Mail Transfer Protocol (SMTP) and X.500 by the Lightweight Directory Access Protocol (LDAP).

A major event in the telecommunications/WAN field in 1984 was the divestiture of AT&T as the result of the seven-year antitrust suit brought against AT&T by the U.S. Justice Department. AT&T's 22 Bell operating companies were formed into 7 new RBOCs (only 4 are left today). This meant the end of the old Bell telephone system, but the RBOCs soon formed the Bellcore telecommunications research establishment to provide the research and standards functions previously supplied to them by Bell Laboratories, which remained with AT&T. The United States was then divided into Local Access and Transport Areas (LATAs), with intra-LATA communication handled by local exchange carriers (the Bell Operating Companies or BOCs) and inter-LATA communication handled by inter-exchange carriers (IXCs) such as AT&T, MCI, and Sprint Corporation.

The result of the breakup was increased competition, which led to new WAN technologies and generally
lower costs. One of the first effects was the offering of T1 services to subscribers in 1984. Until then, this technology had been used only for backbone circuits for long-distance communication. New hardware devices were offered to take advantage of the increased bandwidth, especially high-speed T1 multiplexers, or
muxes, that could combine voice and data in a single communication stream. The year 1984 also saw the development of digital Private Branch Exchange (PBX) systems by AT&T, bringing new levels of power and flexibility to corporate subscribers.

The Signaling System #7 (SS7) digital signaling system was deployed within the PSTN in the 1980s, first in Sweden and later in the United States. SS7 made new telephony services available to subscribers, such as caller ID, call blocking, and automatic callback.

The first trials of ISDN, a fully digital telephony technology that runs on existing copper local loop lines, began in Japan in 1983 and in the United States in 1987. All major metropolitan areas in the United States have since been upgraded to make ISDN available to those who want it, but ISDN has not caught on in the United States as a WAN technology as much as it has in Europe.

The 1980s also saw the standardization of SONET technology, a high-speed physical layer (PHY) fiber-optic networking technology developed from time-division multiplexing (TDM) digital telephone system technologies. Following the divestiture of AT&T in 1984, local telephone companies had to interface their own TDM-based digital telephone systems with the proprietary TDM schemes of long-distance carriers, and the resulting incompatibilities created many problems. This provided the impetus for creating the SONET standard, which was finalized in 1989 through a series of Comité Consultatif International Télégraphique et Téléphonique (CCITT; anglicized as International Telegraph and Telephone Consultative Committee) standards known as G.707, G.708, and G.709. By the mid-1990s almost all long-distance telephone traffic in the United States used SONET on trunk lines as the physical interface.

The 1980s brought the first test implementations of Asynchronous Transfer Mode (ATM) high-speed cell-switching technologies, which could use SONET as the physical interface. Many concepts basic to ATM were developed in the early 1980s at the France-Telecom laboratory in Lannion, France, particularly the PRELUDE project, which demonstrated the feasibility of end-to-end ATM networks running at 62 Mbps. The CCITT standardized the 53-byte ATM cell format in 1988, and the new technology was given a further push with the creation of the ATM Forum in 1991. Since then, use of ATM has grown significantly in telecommunications provider networks and has become a high-speed backbone technology in many enterprise-level networks around the world. However, the vision of ATM on users' desktops has not been realized because of the emergence of cheaper Fast Ethernet and GbE LAN technologies and because of the complexity of ATM itself.
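
For reference, each 53-byte ATM cell breaks down into a 5-byte header plus a 48-byte payload (5 + 48 = 53). The small, fixed cell size was chosen to keep switching delay low and predictable, which matters when voice and data share the same links.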

The convergence of voice, data, and broadcast information remained a distant vision throughout the 1980s and was even set back because of the proliferation of networking technologies, the competition between cable and broadcast television, and the slow adoption of
residential ISDN. New services did appear, however, especially in the area of commercial online services such as America Online (AOL), CompuServe, and Prodigy, which offered consumers e-mail, bulletin board systems (BBSs), and other services.

A significant milestone in the development of the Internet was the switch of ARPANET's networking protocol from NCP to TCP/IP. On January 1, 1983, NCP was turned off permanently--anyone who had not migrated to TCP/IP was out of luck. ARPANET, which by then connected several hundred systems, was split into two parts, ARPANET and MILNET.

The first international use of TCP/IP took place in 1984 at the Conseil Européen pour la Recherche Nucléaire (CERN), a physics research center located in Geneva, Switzerland. TCP/IP was designed to provide a way of networking different computing architectures in heterogeneous networking environments. Such a protocol was badly needed because of the proliferation of vendor-specific networking architectures in the preceding decade, including "homegrown" solutions developed at many government and educational institutions. TCP/IP made it possible to connect diverse architectures such as UNIX workstations, VMS minicomputers, and Cray supercomputers into a single operational network. TCP/IP soon superseded proprietary protocols such as Xerox Network Systems (XNS), ChaosNet, and DECnet. It has since become the de facto standard for internetworking all types of computing systems.

CERN was primarily a research center for high-energy particle physics, but it became an early European pioneer of TCP/IP and by 1990 was the largest subnetwork of the Internet in Europe. In 1989 a CERN researcher named Tim Berners-Lee developed the Hypertext Transfer Protocol (HTTP) that formed the basis of the World Wide Web (WWW). And all of this developed as a sidebar to the real research that was being done at CERN--slamming together protons and electrons at high speeds to see what fragments would appear!

Also important to the development of Internet technologies and protocols was the introduction of the Domain Name System (DNS) in 1984. At that time, ARPANET had more than 1000 nodes and trying to remember their numerical IP addresses was a headache. DNS greatly simplified that process. Two other Internet protocols were introduced soon afterwards: the Network News Transfer Protocol (NNTP) was developed in 1987, and Internet Relay Chat (IRC) was developed in 1988.

Other systems paralleling ARPANET were developed in the early 1980s, including the research-oriented Computer Science NETwork (CSNET), and the Because It's Time NETwork (BITNET), which connected IBM mainframe computers throughout the educational community and provided e-mail services. Gateways were set up in 1983 to connect CSNET to ARPANET, and BITNET was similarly connected to ARPANET. In 1989, BITNET and CSNET merged into the Corporation for Research and Educational Networking (CREN).

In 1986 the National Science Foundation NETwork (NSFNET) was created. NSFNET networked the five national supercomputing centers together using dedicated 56-Kbps lines. The connection was soon seen as inadequate and was upgraded to 1.544-Mbps T1 lines in 1988. In 1987, NSF and Merit Networks agreed to jointly manage the NSFNET, which had effectively become the backbone of the emerging Internet. By 1989 the Internet had grown to more than 100,000 hosts, and the Internet Engineering Task Force (IETF) was officially created to administer its development. In 1990, NSFNET officially replaced the aging ARPANET and the modern Internet was born, with more than 20 countries connected.

Cisco Systems was one of the first companies in the 1980s to develop and market routers for Internet Protocol (IP) internetworks, a business that today is worth billions of dollars and is a foundation of the Internet. Hewlett-Packard was Cisco's first customer for its routers, which were originally called gateways.

In wireless telecommunications, analog cellular was implemented in Norway and Sweden in 1981. Systems were soon rolled out in France, Germany, and the United Kingdom. The first U.S. commercial cellular phone system, which was named the Advanced Mobile Phone Service (AMPS) and operated in the 800-MHz frequency band, was introduced in 1983. By 1987 the United States had more than 1 million AMPS cellular subscribers, and higher-capacity digital cellular phone technologies were being developed. The Telecommunications Industry Association (TIA) soon developed specifications and standards for digital cellular communication technologies.

A landmark event that was largely responsible for the phenomenal growth in the PC industry (and hence the growth of the client/server model and local area networking) was the release of the first version of Microsoft's text-based, 16-bit MS-DOS operating system in 1981. Microsoft, which had become a privately held corporation with Bill Gates as president and chairman of the board and Paul Allen as executive vice president, licensed MS-DOS version 1 to IBM for its PC. MS-DOS continued to evolve and grow in power and usability until its final version, MS-DOS 6.22, which was released in 1994. (I still carry around a DOS boot disk wherever I go in case I need it--don't you?) Anyway, one year after the first version of MS-DOS was released in 1981, Microsoft had its own fully functional corporate network, the Microsoft Local Area Network (MILAN), which linked a DEC 206, two PDP-11/70s, a VAX 11/250, and a number of MC68000 machines running XENIX. This type of setup was typical of the heterogeneous computer networks that characterized the early 1980s.

In 1983, Microsoft unveiled its strategy to develop a new operating system called Windows with a graphical user interface (GUI). Version 1 of Windows, which shipped in 1985, used a system of tiled windows that allowed users to work with several applications simultaneously by switching between them. Version 2 was released in 1987 and added overlapping windows and support for expanded memory.

Microsoft launched its SQL Server relational database server software for LANs in 1988. In its current version, SQL Server 2000 is an enterprise-class application that competes with other major database platforms such as Oracle and DB2. IBM and Microsoft jointly released their 16-bit OS/2 operating system in 1987 and released OS/2 1.1 with Presentation Manager a year later.

In miscellaneous developments, IBM researchers developed the Reduced Instruction Set Computing (RISC) processor architecture in 1980. Apple Computer introduced its Macintosh computing platform in 1984 (the successor to its Lisa system), which featured a windows-based GUI that was a precursor to Microsoft Windows. Apple also introduced the 3.5-inch floppy disk in 1984. Sony Corporation and Philips developed CD-ROM technology in 1985. (Recordable CD-R technologies were developed in 1991.) IBM released its AS/400 midrange computing system in 1988, which continues to be popular to this day.

The 1970s: The Birth of Ethernet

Although the 1960s were the decade of the mainframe, the 1970s gave birth to Ethernet, which today is by far the most popular LAN technology. Ethernet was born in 1973 in Xerox Corporation's research lab in Palo Alto, California. (An earlier experimental network called ALOHAnet was developed in 1970 at the University of Hawaii.) The original Xerox networking system was known as X-wire and worked at 2.94 Mbps. X-wire was experimental and was not used commercially, although a number of Xerox Alto workstations used for word processing were networked together in the White House using X-wire during the Carter administration. In 1979, Digital Equipment Corporation (DEC), Intel, and Xerox formed the DIX consortium and developed the specification for standard 10-Mbps Ethernet, or thicknet, which was published in 1980. This standard was revised and additional features were added in the following decade.

The conversion of the backbone of the Bell telephone system to digital circuitry continued during the 1970s and included the deployment in 1974 of the first digital data service (DDS) circuits (then called the Dataphone Digital Service). DDS formed the basis of the later deployment of ISDN and T1 lines to customer premises, and AT&T installed its first digital switch in 1976.

In wide area networking, a new telecommunications service called X.25 was deployed toward the end of the decade. This new system was packet-switched, in contrast to the circuit-switched PSTN, and later evolved into public X.25 networks such as GTE's Telenet Public Packet Distribution Network (PDN), which later became SprintNet. X.25 was widely deployed in Europe, where it still maintains a large installed base, especially for communications in the banking and financial industry.

In 1970 the Federal Communications Commission (FCC) announced the regulation of the fledgling cable television industry. Cable TV remained primarily a broadcast technology for delivering entertainment to residential homes until the mid-1990s, when technologies began to be developed to enable it to carry broadband services to residential subscribers. Cable modems now compete strongly with Digital Subscriber Line (DSL) as the main two forms of broadband Internet access technologies.

Despite all these technological advances, however, telecommunications services in the 1970s remained unintegrated, with voice, data, and entertainment carried on different media. Voice was carried by telephone, which was still analog at the customer premises; entertainment was broadcast using radio and television technologies; and data was usually carried over RS-232 or Binary Synchronous Communication (BSC) serial connections between dumb terminals and mainframes (or, for remote terminals, long-haul modem connections over analog telephone lines).

The 1970s were also notable for the growth of ARPANET, which grew throughout the decade as additional hosts were added at various universities and government institutions. By 1971 the network had 19 nodes, mostly consisting of a mix of PDP-8, PDP-11, IBM
S/360, DEC-10, Honeywell, and other mainframe and minicomputer systems linked together. The initial design of ARPANET called for a maximum of 265 nodes, which seemed like a distant target in the early 1970s. The initial protocol used on this network was NCP, but this was replaced in 1983 by the more powerful TCP/IP protocol suite. In 1975 the administration of ARPANET came under the authority of the Defense Communications Agency.

ARPANET protocols and technologies continued to evolve using the informal RFC process developed in 1969. In 1972 the Telnet protocol was defined in RFC 318, followed by FTP in 1973 (RFC 454). ARPANET became an international network in 1973 when nodes were added at University College London in the United Kingdom and at the NORSAR research facility in Norway. ARPANET even established an experimental wireless packet-switching radio service in 1977, which two years later became the Packet Radio Network (PRNET).

Meanwhile, in 1974 the first specification for the Transmission Control Protocol (TCP) was published. Progress on the TCP/IP protocols continued through several iterations until the basic TCP/IP architecture was formalized in 1978, but it was not until 1983 that ARPANET started using TCP/IP instead of NCP as its primary networking protocol.

The year 1977 also saw the development of UNIX to UNIX Copy (UUCP), a protocol and tool for sending messages and transferring files on UNIX-based networks. An early version of the USENET news system using UUCP was developed in 1979. (The Network News Transfer Protocol [NNTP] came much later, in 1987.)

In 1979 the first commercial cellular phone system began operation in Japan. This system was analog in nature, used the 800-MHz and 900-MHz frequency bands, and was based on a concept developed in 1947 at Bell Laboratories.

An important standard to emerge in the 1970s was the public-key cryptography scheme developed in 1976 by Whitfield Diffie and Martin Hellman. This scheme underlies the Secure Sockets Layer (SSL) protocol developed by Netscape Communications, which is still the predominant approach for ensuring privacy and integrity of financial and other transactions over the World Wide Web (WWW). Without SSL, popular e-business sites such as Amazon and eBay would have a hard time attracting customers!

Among other miscellaneous developments during this decade, in 1970 IBM researchers invented the relational database model, a set of conceptual technologies that has become the foundation of today's distributed application environments. In 1971, IBM demonstrated the first speech recognition technologies--which have since led to those annoying automated call handling systems found in customer service centers! IBM also developed the concept of the virtual machine in 1972 and created the first sealed disk drive (the Winchester) in 1973. In 1974, IBM introduced the Systems Network Architecture (SNA) for networking its mainframe computing environment. In 1971, Intel released its first microprocessor, a 4-bit processor called the 4004 that ran at a clock speed of 108 kilohertz (kHz), a snail's pace by modern standards but a major development at the time. Another significant event was the launching of the online service CompuServe in 1979, which led to the development of the first online communities.

The first personal computer, the Altair, went on the market as a kit in 1975. The Altair was based on the Intel 8080, an 8-bit processor, and came with 256 bytes of memory, toggle switches, and light-emitting diode (LED) lights. Although the Altair was basically for hobbyists, the Apple II from Apple Computer, which was introduced in 1977, was much more. A typical Apple II system, which was based on the MOS Technology 6502 8-bit processor, had 4 KB of RAM, a keyboard, a motherboard with expansion slots, built-in BASIC in ROM, and color graphics. The Apple II quickly became the standard desktop system in schools and other educational institutions. A physics classroom I taught in had one all the way into the early 1990s (limited budget!). However, it was not until the introduction of the IBM Personal Computer (PC) in 1981 that the full potential of personal computers began to be realized, especially in businesses.

In 1975, Bill Gates and Paul Allen licensed their BASIC computer programming language to MITS, the Altair's manufacturer. BASIC was the first computer language specifically written for a personal computer. Gates and Allen coined the name "Microsoft" for their business partnership, and they officially registered it as a trademark the following year. Microsoft Corporation went on to license BASIC to other personal computing platforms such as the Commodore PET and the TRS-80. I loved BASIC in those early days, and I still do!

The 1960s: Computer Networking

In the 1960s computer networking was essentially synonymous with mainframe computing, and the distinction between local and wide area networks did not yet exist. Mainframes were typically "networked" to a series of dumb terminals with serial connections running on RS-232 or some other electrical interface. If a terminal in one city needed to connect with a mainframe in another city, a 300-baud long-haul modem would use the existing analog Public Switched Telephone Network (PSTN) to form the connection. The technology was primitive indeed, but it was an exciting time nevertheless. I remember taking a computer science class in high school toward the end of the decade, and having to take my box of punch cards down to the mainframe terminal at the university and wait in line for the output from the line printer. Alas, poor Fortran, I knew thee well!

To continue the story, the quality and reliability of the PSTN increased significantly in 1962 with the introduction of pulse code modulation (PCM), which converted analog voice signals into digital sequences of bits. A consequent development was the first commercial touch-tone phone, which was introduced in 1962. Before long, digital phone technology became the norm, and DS-0 (Digital Signal Zero) was chosen as the basic 64-kilobit-per-second (Kbps) channel upon which the entire hierarchy of the digital telephone system was built. A later development was a device called a channel bank, which took 24 separate DS-0 channels and combined them using time-division multiplexing (TDM) into a single 1.544-Mbps channel called DS-1 or T1. (In Europe, 30 DS-0 channels were combined to make E1.) When the backbone of the Bell telephone system finally became fully digital years later, the transmission characteristics improved significantly for both voice and data because of the higher quality and lower noise of Integrated Services Digital Network (ISDN) digital lines, though local loops have remained analog in many places. But that is getting a little ahead of the story.
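
The arithmetic behind these rates is worth spelling out. Each DS-0 carries 8-bit PCM samples taken 8,000 times per second, or 64 Kbps. A channel bank multiplexes 24 such channels (24 × 64 Kbps = 1.536 Mbps) and adds one framing bit per 193-bit frame, sent 8,000 times per second, for another 8 Kbps--giving the familiar 1.544-Mbps DS-1/T1 rate. The European E1 similarly multiplexes 32 64-Kbps timeslots (30 of them carrying voice, the other 2 used for framing and signaling) for a 2.048-Mbps aggregate.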

The first active communications satellite, Telstar, was launched in 1962. This technology did not immediately affect the networking world because of the high latency of satellite links compared to undersea cable communications, but satellites eventually surpassed transoceanic underwater telephone cables (which by the mid-1960s could carry about 130 simultaneous conversations) in carrying capacity. In fact, early in 1960 scientists at Bell Laboratories transmitted a communication signal coast-to-coast across the United States by bouncing it off the moon! By 1965 commercial communication satellites such as Early Bird were in service.

As an interesting aside, in 1961 the Bell system proposed a new telecommunications service called TELPAK, which it claimed would lead to an "electronic highway" for communication, but it never pursued the idea. Could this have been an early portent of the "information superhighway" of the mid-1990s?

The year 1969 witnessed an event whose full significance was not realized until more than two decades later: namely, the development of the ARPANET packet-switching network. ARPANET was a project of the U.S. Department of Defense's Advanced Research Projects Agency (ARPA), which became DARPA in 1972. Similar efforts were underway in France and the United Kingdom, but it was the U.S. project that eventually evolved into the present-day Internet. (France's Minitel packet-switching system, which was based on the X.25 protocol and which aimed to bring data networking into every home, did take off in 1984 when the French government started giving away Minitel terminals; by the early 1990s, more than 20 percent of the country's population was using it.) The original ARPANET network connected computers at the Stanford Research Institute (SRI), the University of California at Los Angeles (UCLA), the University of California at Santa Barbara (UCSB), and the University of Utah, with the first node being installed at UCLA's Network Measurement Center. A year later, Harvard University, the Massachusetts Institute of Technology (MIT), and a few other prominent institutions were added to the network, but few of those involved could imagine that this technical experiment would someday profoundly affect modern society and the way we do business.

The year 1969 also saw the publication of the first Request For Comments (RFC) document, which specified the Network Control Protocol (NCP), the first transport protocol of ARPANET. The informal RFC process evolved into the primary means of directing the evolution of the Internet and is still used today.

That same year, Bell Laboratories developed the UNIX operating system, a multitasking, multiuser operating system that became popular in academic computing environments in the 1970s. A typical UNIX system in 1974 was a PDP-11 minicomputer with dumb terminals attached. In a configuration with 768 kilobytes (KB) of magnetic core memory and a couple of 200-megabyte (MB) hard disks, the cost of such a system would have been around $40,000. I remember working in those days on a PDP-11 in the cyclotron lab of my university's physics department, feeding in bits of punched tape and watching lights flash. It was an incredible experience.

Many important standards for computer systems also evolved during the 1960s. In 1962, IBM introduced the first 8-bit character encoding system, called Extended Binary-Coded Decimal Interchange Code (EBCDIC). A year later the competing American Standard Code
for Information Interchange (ASCII) was introduced. ASCII ultimately won out over EBCDIC even though EBCDIC was 8-bit and ASCII was only 7-bit. The American National Standards Institute (ANSI) formally standardized ASCII in 1968. ASCII was first used in serial transmission between mainframe hosts and dumb terminals in mainframe computing environments, but it was eventually extended to all areas of computer and networking technologies.

Other developments in the 1960s included the introduction in 1964 of IBM's powerful System/360 mainframe computing environment, which was widely implemented in government, university, and corporate computing centers. A decade earlier, in 1956, IBM had introduced the first disk storage system, which employed 50 metal platters, each 2 feet (0.6 meter) wide, and stored a total of 5 MB. IBM created the first floppy disk in 1967. In 1969, Intel Corporation released a RAM chip that stored 1 KB of information, which at the time was an amazing feat of engineering.