Mobile Wireless Communications

Global System for Mobile Communication (GSM). Mobile Wireless Market: Technology Forecasts

Everything is converging. The wired world and the wireless world are converging. The Internet and mobile wireless are converging. The distinction between wireless, wireline and Internet service providers is beginning to blur. And the glue, certainly, is "mobile wireless".

Mobile wireless has exploded in popularity because it simplifies and revolutionizes communication. The market for mobile wireless is growing by leaps and bounds. The success of mobile communications lies in the ability to provide instant connectivity anytime, anywhere, and to deliver high-speed data services to the mobile user. If the convergence of mobile wireless and fixed communication networks is to happen in any real sense, the quality and speeds available in the mobile environment must match those of the fixed networks. So the challenges for mobile networks lie in providing a very large footprint of mobile services (to make the movement from one network to another as transparent to the user as possible) together with high-speed, reliable data services and high-quality voice. A range of successful mobile technologies exists today in various parts of the world, and each technology must evolve to fulfill these requirements. In the following sections I'll talk about the mobile technologies existing today, how they compare, how they are shaping up, and what we can expect to see in the near future.

The mobile wireless market is predominantly voice-oriented, with low-speed data services. The popularity of mobile voice services has been the deciding factor in the development of mobile networks so far. Data, mainly in the form of SMS, has basically been an extra service. However, SMS is fast becoming very popular, and in many European countries subscribers now spend more on SMS than on voice. Both the voice and data markets continue to grow, and the second-generation networks are evolving to keep up and are, in fact, generating demand for newer services. Although digital technologies have improved the quality of service provided by mobile networks, voice quality still does not match landline toll quality. New speech coding techniques like EFR and adaptive multi-rate are bridging this gap.

Technologies And Services Existing Today

Many second-generation mobile technologies exist today, each having influence in specific parts of the world. GSM, TDMA (IS-136), and CDMA (IS-95) are the main technologies in the second-generation mobile market. GSM has by far been the most successful standard in terms of its coverage. All these systems have different features and capabilities. Although both GSM and TDMA-based networks use time division multiplexing on the air interface, their channel sizes, structures and core networks differ. CDMA has an entirely different air interface.

In the following sections I will discuss the existing standards, the technologies, the situation today and also talk about some of the forecasts for these technologies. This should help you understand the situation today and also dispel some of the basic misconceptions about the viability of these technologies.

Global System for Mobile Communication (GSM)

GSM's air interface is based on narrowband TDMA technology, where the available frequency bands are divided into time slots, with each user having access to one time slot at regular intervals. Narrowband TDMA allows eight simultaneous communications on a single 200 kHz carrier and is designed to support 16 half-rate channels. The fundamental unit of time in this TDMA scheme is called a burst period, and it lasts 15/26 ms (approx. 0.577 ms). Eight burst periods are grouped into a TDMA frame (120/26 ms, approx. 4.615 ms), which forms the basic unit for the definition of logical channels. One physical channel is one burst period per TDMA frame. A GSM mobile can seamlessly roam nationally and internationally, which requires that registration, authentication, call routing and location updating functions exist and be standardized across GSM networks.
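The burst and frame durations above follow directly from each other: eight bursts of 15/26 ms each make one 120/26 ms frame. A small sketch of the arithmetic, using exact fractions:

```python
from fractions import Fraction

# GSM TDMA timing, from the figures above:
# one burst period lasts 15/26 ms, and eight bursts make one frame.
BURST_PERIOD_MS = Fraction(15, 26)      # ~0.577 ms
FRAME_MS = 8 * BURST_PERIOD_MS          # 120/26 ms, ~4.615 ms

def frames_per_second():
    """Number of TDMA frames transmitted per second on one carrier."""
    return 1000 / FRAME_MS              # convert ms to frames/s

print(float(BURST_PERIOD_MS))       # ≈ 0.5769 ms
print(float(FRAME_MS))              # ≈ 4.6154 ms
print(float(frames_per_second()))   # ≈ 216.67 frames/s
```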

GSM offers a variety of data services. GSM users can send and receive data, at rates up to 9600 bps, to users on POTS (Plain Old Telephone Service), ISDN, Packet Switched Public Data Networks, and Circuit Switched Public Data Networks, using a variety of access methods and protocols such as X.25 or X.32. Other data services include Group 3 facsimile, as described in ITU-T recommendation T.30, which is supported by use of an appropriate fax adapter. A unique feature of GSM, not found in older analog systems, is the Short Message Service (SMS). SMS is a bi-directional service for short alphanumeric messages (up to 160 characters, packed into a 140-octet payload). Messages are transported in a store-and-forward fashion. For point-to-point SMS, a message can be sent to another subscriber to the service, and an acknowledgment of receipt is provided to the sender. SMS can also be used in a cell-broadcast mode, for sending messages such as traffic updates or news updates. Messages can also be stored in the SIM card for later retrieval.
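The 160-character SMS limit follows from the 140-octet payload and the GSM 7-bit default alphabet: 140 × 8 / 7 = 160. A minimal sketch of the 7-bit packing, assuming for illustration that each character's 7-bit code equals its ASCII value (true for most Latin letters in the GSM alphabet, but not for every character):

```python
def pack_7bit(text):
    """Pack characters into GSM-style 7-bit septets (little-endian)."""
    septets = [ord(c) & 0x7F for c in text]  # ASCII stand-in for GSM codes
    out, acc, bits = [], 0, 0
    for s in septets:
        acc |= s << bits        # append 7 new bits above the leftovers
        bits += 7
        while bits >= 8:        # emit every complete octet
            out.append(acc & 0xFF)
            acc >>= 8
            bits -= 8
    if bits:                    # flush any trailing partial octet
        out.append(acc & 0xFF)
    return bytes(out)

print(len(pack_7bit("A" * 160)))   # 140 -- a full single SMS payload
print(len(pack_7bit("hello")))     # 5  -- 35 bits round up to 5 octets
```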

The European version of GSM operates at the 900 MHz frequency (and now also at the newer 1800 MHz frequency). Since the North American version of GSM operates at the 1900 MHz frequency, the phones are not interoperable, but the SIMs are. Dual-band 900/1800 and 900/1900 phones have already been released and are in production. Tri-band 900/1800/1900 GSM phones are expected to be manufactured in the next few years, which will allow interoperability between Europe and North America.

A GSM network consists of mobile stations talking to a base transceiver station (BTS) over the Um interface. Many BTSs are connected to a base station controller (BSC) via the Abis interface, and the BSCs connect to the MSC (the core switching network) via the A interface.

GSM network

The HLR and VLR provide customized subscriber services and allow seamless movement from one cell to another. The authentication register and the equipment register provide security and authentication. An OMC and a cell broadcast center allow configuration of the network and provide the cell broadcast service in the GSM network (not shown in the diagram). The voice transmitted on the air interface can be encrypted. Speech is coded at 13 kbps over the air interface; using EFR (Enhanced Full Rate coding), the voice quality approaches landline quality. Recent developments like AMR (adaptive multi-rate coding) allow speech coding and channel coding to be adjusted dynamically, giving acceptable performance even under bad radio conditions. The GSM network supports automatic handovers. Since the mobiles are not transmitting or receiving at all times, battery consumption can be conserved. Using DTX and DRX (discontinuous transmission and reception: the mobile transmits or receives only when voice activity is detected), battery power can be conserved even further, a highly desirable characteristic of any mobile system. Also, because the mobile is not transmitting or receiving at all times, it can listen to control channels and report useful information about other channels back to the cell.

Recent developments and initiatives include:

  • The GSM Association, together with the Universal Wireless Communications Consortium (UWCC), which represents the interests of the TDMA community, is working towards inter-standard roaming between GSM and TDMA (ANSI-136) networks.
  • The majority of European GSM operators plan to implement General Packet Radio Service (GPRS) technology as their network evolution path to third generation.
  • MExE will allow operators to provide customized, user-friendly interfaces to a host of services from GSM, through GPRS and eventually UMTS. The first implementations of MExE are expected to support the Wireless Application Protocol (WAP) and Java applications. MExE can extend the capabilities that currently exist within WAP by enabling a more flexible user interface, more powerful features and better security.
  • A GSM cordless telephony system, providing a small home base station that works with a standard GSM mobile phone in a similar mode to a cordless phone. The base station would be connected to the PSTN.
  • Number portability, which will allow customers to retain their mobile numbers when they change operators or service providers.
  • Location services, which standardize the methods for determining a GSM subscriber's physical location.
  • Tandem free operation, where compressed speech is passed unchanged over the 64 kbps links between the transcoders, improving voice quality.

Mobile Wireless Communications Tomorrow

The drive for 3G is the need for higher capacities and higher data rates. This article takes an in-depth look at the 2.5G and 3G technologies, such as GPRS and EDGE, that lead there.


3rd Generation Wireless, or 3G, is the generic term used for the next generation of mobile communications systems. 3G systems aim to provide enhanced voice, text and data services to users. The main benefit of 3G technologies will be substantially greater capacity, quality and data rates than are currently available. This will enable the provision of advanced services transparently to the end user (irrespective of the underlying network and technology, by means of seamless roaming between different networks) and will bridge the gap between the wireless world and the computing/Internet world, making inter-operation apparently seamless. Third-generation networks should be in a position to support real-time video, high-speed multimedia and mobile Internet access. All this should be possible by means of highly evolved air interfaces, packet core networks, and increased availability of spectrum. Although the ability to provide high-speed data is one of the key features of third-generation networks, their real strength will be in providing enhanced capacity for high-quality voice services. The need for landline-quality voice capacity is growing more rapidly than the current second-generation networks can support. High data capacities will open new revenue sources for operators and bring the Internet closer to the mobile customer. The use of all-ATM or all-IP communications between the network elements will also bring down the operational costs of handling both voice and data, in addition to adding flexibility.

On The Way To 3G

As reflected in the introduction above, the drive for 3G is the need for higher capacities and higher data rates. Whereas higher capacities can basically be obtained only by acquiring a greater chunk of spectrum or by using new, evolved air interfaces, the data requirements can be served to a certain extent by overlaying 2.5G technologies on the existing networks. In many cases it is possible to provide higher-speed packet data by adding a few network elements and performing a software upgrade.

A Look At GPRS, HSCSD, and EDGE

Technologies like GPRS (General Packet Radio Service), High Speed Circuit Switched Data (HSCSD) and EDGE fulfill the requirements for packet data service and increased data rates in the existing GSM/TDMA networks. I'll talk about EDGE separately under the section "Migration To 3G". GPRS is actually an overlay on the existing GSM network, providing packet data services over the same air interface through the addition of two new network elements, the SGSN and GGSN, and a software upgrade. Although GPRS was originally designed for GSM networks, the IS-136 Time Division Multiple Access (TDMA) standard, popular in North and South America, will also support GPRS. This follows an agreement, concluded in early 1999 by the industry associations that support these two network types, to follow the same evolution path towards third-generation mobile networks.

The General Packet Radio Service (GPRS)

The General Packet Radio Service (GPRS) is a wireless service designed to provide a foundation for a number of data services based on packet transmission. Customers will be charged only for the communication resources they actually use. The operator's most valuable resource, the radio spectrum, can be shared among multiple users simultaneously, so the network can support many more data users. Additionally, more than one time slot can be assigned to a user to achieve higher data rates.

GPRS introduces two new major network nodes in the GSM PLMN:

  • Serving GPRS Support Node (SGSN)
    • The SGSN is at the same hierarchical level as an MSC. It tracks the locations of packet-capable mobiles and performs security functions and access control. The SGSN is connected to the BSS via Frame Relay.
  • Gateway GPRS Support Node (GGSN)
    • The GGSN interfaces with external packet data networks (PDNs) to provide the routing destination for data to be delivered to the MS and to send mobile originated data to its intended destination. The GGSN is designed to provide inter-working with external packet switched networks, and is connected with SGSNs via an IP based GPRS backbone network.

A packet control unit (PCU) is also required, which may be placed at the BTS or at the BSC. A number of new interfaces have been defined between the existing network elements and the new elements, and between the new elements themselves. Theoretical maximum speeds of up to 171.2 kilobits per second (kbps) are achievable with GPRS using all eight timeslots at the same time. This is about three times as fast as the data transmission speeds possible over today's fixed telecommunications networks and ten times as fast as current circuit-switched data services on GSM networks. In practice we may not see speeds greater than about 64 kbps, but even that is much higher than what any 2G network can offer. Another advantage is that the user is always connected and is charged only for the amount of data transferred, not for the time connected to the network.
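The 171.2 kbps figure is the per-timeslot rate of the fastest GPRS coding scheme, CS-4 (21.4 kbps), multiplied across all eight timeslots. A small sketch of the arithmetic, using the standard GPRS coding-scheme rates:

```python
# Per-timeslot data rates (kbps) for the four GPRS coding schemes.
CS_RATES_KBPS = {"CS-1": 9.05, "CS-2": 13.4, "CS-3": 15.6, "CS-4": 21.4}

def gprs_peak_kbps(coding_scheme, timeslots):
    """Theoretical peak rate for a given coding scheme and slot count."""
    if not 1 <= timeslots <= 8:
        raise ValueError("a TDMA frame has only 8 timeslots")
    return CS_RATES_KBPS[coding_scheme] * timeslots

print(gprs_peak_kbps("CS-4", 8))  # 171.2 -- the theoretical maximum
print(gprs_peak_kbps("CS-2", 4))  # 53.6  -- a more typical handset class
```

Real handsets are further limited by their multislot class, which is why practical rates fall well below the eight-slot maximum.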

Packet switching means that GPRS radio resources are used only when users are actually sending or receiving data. Rather than dedicating a radio channel to a mobile data user for a fixed period of time, the available radio resource can be concurrently shared between several users. This efficient use of scarce radio resources means that large numbers of GPRS users can potentially share the same bandwidth and be served from a single cell. The actual number of users supported depends on the application being used and how much data is being transferred. Because of the spectrum efficiency of GPRS, there is less need to build in idle capacity that is only used in peak hours.

Many field trials and some commercial GPRS implementations have already taken place. GPRS is the evolution step that almost all GSM operators are considering. Coupled with other technologies like WAP, GPRS can also act as a stepping stone towards the convergence of cellular service providers and Internet service providers.

HSCSD (High Speed Circuit Switched Data) is the evolution of circuit-switched data within the GSM environment. HSCSD enables the transmission of data over a GSM link at speeds of up to 57.6 kbps. This is achieved by concatenating, i.e. adding together, consecutive GSM timeslots, each of which is capable of supporting 14.4 kbps. Up to four GSM timeslots can be combined, which gives the theoretical maximum of 57.6 kbps, broadly equivalent to the transmission rate of one ISDN B-channel. HSCSD is part of the planned evolution of the GSM specification and is included in the GSM Phase 2 development.

With HSCSD, a permanent connection is established between the called and calling parties for the exchange of data. Being circuit switched, HSCSD is more suited to applications such as video conferencing and multimedia than to 'bursty' applications such as email, which are better served by packet-switched data. In networks where HSCSD is deployed, GPRS may only be assigned third priority, after voice as number one and HSCSD as number two. In theory, HSCSD can be preempted by voice calls, such that HSCSD calls are reduced to one channel if voice calls are seeking to occupy those channels. HSCSD therefore does not disrupt voice service availability, but it does affect GPRS. Even given preemption, it is difficult to see how HSCSD can be deployed in busy networks and still deliver an agreeable user experience, i.e. a continuously high data rate. HSCSD is therefore more likely to be deployed in start-up networks or those with plenty of spare capacity, since it is relatively inexpensive to deploy and can turn some spare channels into revenue streams.
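The HSCSD arithmetic above is simple slot aggregation: each timeslot carries 14.4 kbps and at most four can be concatenated. A minimal sketch:

```python
HSCSD_SLOT_KBPS = 14.4   # rate of one GSM timeslot with 14.4 kbps channel coding
MAX_HSCSD_SLOTS = 4      # HSCSD concatenates at most four consecutive slots

def hscsd_rate_kbps(timeslots):
    """Aggregate rate from concatenating consecutive GSM timeslots."""
    if not 1 <= timeslots <= MAX_HSCSD_SLOTS:
        raise ValueError("HSCSD uses between 1 and 4 timeslots")
    return HSCSD_SLOT_KBPS * timeslots

print(hscsd_rate_kbps(4))  # 57.6 -- about one ISDN B-channel (64 kbps)
```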

An advantage for HSCSD could be that while GPRS is complementary to other packet-based networks such as the Internet, HSCSD could be the best way of communicating with other circuit-switched media such as the PSTN and ISDN. One potential technical difficulty with HSCSD, however, arises because in a multi-timeslot environment, dynamic call transfer between different cells on a mobile network ("handover") is complicated unless the same slots are available end-to-end throughout the duration of the circuit-switched data call.

Because of the way these technologies are evolving, the market need for high-speed circuit-switched data, and the market response to GPRS, mobile infrastructure vendors are not as committed to HSCSD as they are to GPRS. So we may see HSCSD only in isolated networks around the world, used by operators with enough spare capacity to offer it at lower prices, such as Orange. [1] believes that every GSM operator in Europe will deploy GPRS, and that by 2005 GPRS users will almost match the number of voice-only users. Right now there are 300 million wireless phones in the world; by 2005 we expect one billion.

A quick look at the table below will help you appreciate and clearly understand the characterization of technologies as second generation, 2.5 generation and 3G. We have looked into 2G and some 2.5G technologies.

2G and 3G mobile communications

author: anonymous
Wireless Communications
Mobile Wireless Communications Today Report

1G - 1st Generation mobile communications

With the creation of the microprocessor and the digitization of control links between mobile phones and cell sites in the 1970s, the first generation of cellular standards (1G) was developed around analog technology. In the early eighties, Europe was concerned that multiple standards were being developed across countries with no conformity. Compared to the United States, which concentrated on standards development within its own boundaries, Europe took a different strategic approach, focusing on unifying its mobile growth efforts.

Nordic Mobile Telephone System (NMT-450):

In 1981, the first multinational cellular system was launched in the Scandinavian countries of Denmark, Sweden, Finland, and Norway. Europe was already dealing with nine incompatible analog communication systems that caused roaming difficulties across borders. With strong government backing, the Scandinavian countries saw early success with their NMT standard. This put the region at the forefront of cellular standards development and led Europe to focus on developing the next generation of cellular standards using digital rather than analog technology.

Advanced Mobile Phone Services (AMPS):

On October 12, 1983, the Regional Bell Operating Company Ameritech started the first American commercial cellular service in Chicago, based on the AMPS standard. AMPS operated in the 800 MHz band and used FDMA (Frequency Division Multiple Access) technology for transferring information.

During this period, the United States had few concerns over roaming and standards across North American boundaries compared to what Europe had gone through in its own region. With the completion of the Bell System divestiture around 1984, new competition and new products proliferated in the telecommunications industry. However, the United States had already developed a strong landline system and had little economic incentive to push its development beyond analog standards at this time. Also, with little frequency available, the FCC's heavy regulation of the airwaves would prove to be a hindrance to the progression of U.S. cellular networks and technology compared to Europe.

Total Access Communications Systems (TACS):

Around 1985, the United Kingdom developed TACS as a new national standard for its own region. TACS was based upon the AMPS standard, using FDMA analog technology, and operated in the 900 MHz frequency range.

The Growth of Mobile Standards and Cellular Functionality

By Scott Pearson

4/14/05

Professor Carey
New Mass Media
Spring 2005

Voice-Over-Internet Protocol (VoIP)

VoIP (Voice-Over-Internet Protocol) - also known as Internet Protocol telephony (IP telephony) - is becoming a key driver in the evolution of voice communications.

Voice Over Internet Protocol (VoIP) is a telephony technology used to transmit ordinary telephone calls over the Internet. VoIP takes analog audio signals and turns them into digital signals (packets) that are transmitted over Internet Protocol (IP) networks. VoIP's advantages include low cost, flexibility, and mobility. Conversely, its disadvantages include sound-quality problems such as latency (delay), jitter, and packet loss. VoIP has a number of cultural, social, and regulatory impacts that solution providers must consider when marketing their services.

VoIP is a relatively new technology, useful not only for phones but also as a broad application platform enabling voice interactions on devices such as desktop computers, mobile devices, set-top boxes, and gateways, as well as on devices with business-specific applications where voice communication is an important feature.

VoIP - Technology Overview

VoIP is a new form of communication that takes analog audio signals and turns them into digital signals, or packets. It is an innovative alternative to the traditional circuit-switched method of telecommunication, in which a dedicated circuit between two parties is maintained: to set up a traditional phone call between two telephones, the switches and the intervening network establish a dedicated route from one end of the call to the other. VoIP, by contrast, uses a packet-switched method in which audio signals are converted into digital data at the originating end, transmitted over the Internet, and converted back to analog signals at the receiving end. In other words, VoIP digitizes voice, inserts the digitized data into discrete packets, and sends them over the IP network. The packets have a destination address but no fixed path through the network; they arrive at the address, where they are reassembled and converted back to analog audio signals. VoIP integrates voice and data communications and can turn any Internet connection into a phone line. It is a revolutionary technology that has the potential to drastically change the way people communicate and talk on the phone around the world.
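The packetization step described above can be sketched in a few lines. Here a stream of PCM samples is split into 20 ms frames, each stamped with a sequence number and a timestamp (the fields RTP-style protocols use to reorder packets at the far end); the 8 kHz sample rate and 20 ms frame size are illustrative choices, not fixed by VoIP itself:

```python
SAMPLE_RATE = 8000        # samples/s, typical narrowband telephony
FRAME_MS = 20             # payload duration carried by one packet
SAMPLES_PER_FRAME = SAMPLE_RATE * FRAME_MS // 1000   # 160 samples

def packetize(samples):
    """Split digitized audio into (seq, timestamp, payload) packets."""
    packets = []
    for seq, start in enumerate(range(0, len(samples), SAMPLES_PER_FRAME)):
        payload = samples[start:start + SAMPLES_PER_FRAME]
        packets.append((seq, start, payload))  # timestamp = first sample index
    return packets

one_second = [0] * SAMPLE_RATE        # 1 s of silence, as PCM samples
print(len(packetize(one_second)))     # 50 packets: 1000 ms / 20 ms
```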

Advantages of VoIP

VoIP has a number of advantages. The major advantage relates to cost: VoIP has tremendous savings potential for anyone who communicates over long distances. In a corporate environment, for instance, the cost of leased lines disappears if phone calls are transmitted over the Internet. In addition, if an employee moves to a different location within the same office using a regular phone system like the Public Switched Telephone Network (PSTN), fees are incurred in that move; with a VoIP system, employees need only plug their IP phones into the office's Ethernet jack, at no extra cost. A digital private branch exchange (PBX)1 system already provides for the integration of traditional telephone services and thereby eliminates the aforementioned costs of moving. Secondly, flexibility and mobility are gained with the adoption of VoIP: individuals can simply plug in their laptops, access the Internet, and talk on the phone, potentially improving organizational communication and customer service. In terms of customer service, by using VoIP's capabilities with different enterprise software, callers can easily be routed to whom they want to talk, and the service representative has ready access to pertinent information that improves customer service.

Disadvantages of VoIP

VoIP also has its share of disadvantages when compared to the functionality of the PSTN. A major disadvantage of VoIP is that it is a new technology, so its long-term benefits and risks are not yet known. These risks include the unknown service life of hardware and infrastructure, and uncertainties surrounding reliability and quality. The factors that affect sound quality during transmission are latency (or delay), jitter, and packet loss.2 In terms of latency, human ears can tolerate a delay of 150-250 ms without noticing it. The PSTN meets this standard with a nominal delay of 150 ms; VoIP, however, cannot consistently stay within this bound.3 Jitter is defined as the variability in packet arrival times at the destination.4 Voice packets are transmitted over the same IP network as normal data packets and therefore have to compete with them for bandwidth. When a burst of network traffic occurs (mostly in the form of data packets), voice packets arrive at the destination at sporadic times. The consequence of this sporadic arrival is sound distortion at the receiver's end: jitter. Lastly, packet loss occurs when voice packets transmitted over the network do not arrive at the destination. As with jitter, the IP network is to blame for this drawback, because it does not guarantee delivery of any packet (data or voice). The consequence of packet loss is distortion at the receiver's end, as sounds and words may never reach the receiver.
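Receivers typically quantify jitter with a running estimate of the variation in packet transit time. The smoothed estimator below follows the form used by RTP (RFC 3550), where each new transit-time difference nudges the estimate by 1/16; the transit times in the example are made up for illustration:

```python
def update_jitter(jitter, prev_transit, transit):
    """One step of the RFC 3550 interarrival-jitter estimator (ms)."""
    d = abs(transit - prev_transit)
    return jitter + (d - jitter) / 16.0

# Transit times (arrival time minus send time, ms) for a short packet stream:
# steady at 100 ms, then a 30 ms spike, then recovery.
transits = [100, 100, 130, 90, 100]
jitter = 0.0
for prev, cur in zip(transits, transits[1:]):
    jitter = update_jitter(jitter, prev, cur)
print(round(jitter, 3))   # 4.617
```

The 1/16 gain makes the estimate a slowly decaying average, so a single late packet raises it only a little while sustained variation drives it up.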

With regard to availability, VoIP must meet the "five nines" availability demanded of phone services (i.e. VoIP must be available at least 99.999% of the time). A common misconception is that VoIP will have lower dependability and availability than standard PSTN systems because of power failures, Internet service provider downtime, security issues, etc. Nevertheless, it has been demonstrated that it is possible to build VoIP systems that are more reliable than circuit-based PSTN platforms. Adaptive routing ensures that packets reach their destination over multiple network lines.
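The "five nines" figure translates into a concrete downtime budget: 99.999% availability leaves roughly five minutes of outage per year. The arithmetic:

```python
def max_downtime_minutes_per_year(availability):
    """Downtime budget implied by an availability target (non-leap year)."""
    minutes_per_year = 365 * 24 * 60          # 525,600 minutes
    return (1.0 - availability) * minutes_per_year

print(round(max_downtime_minutes_per_year(0.99999), 2))   # 5.26
print(round(max_downtime_minutes_per_year(0.999), 2))     # 525.6 ("three nines")
```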

Overall, the disadvantages of VoIP are not significant enough to hamper its ability to compete with the traditional PSTN, and advances are being made to overcome these stumbling blocks. For example, jitter has been shown to decrease when specialized gateways detect that large network data bursts are affecting throughput and adjust accordingly. The technology has matured to a state where major players now offer VoIP solutions as alternatives to traditional telecommunication solutions.

Mobile VoIP

Even when restricted to wired computer networks, VoIP offers the mobility provided by 'mobile IP'. A 'home agent' serving your home network can be informed of where you are and can request, at call set-up time, that voice streams be sent to and received from your temporary IP address or from a 'foreign agent' acting on your behalf.

Mobile VoIP has also come to mean the application of VoIP technology and protocols to battery-powered, wireless-enabled mobile devices, including handsets. Most commonly this means VoIP over WiFi, sometimes referred to as VoWiFi or VoWLAN. Since IEEE 802.11 WLANs provide physical and data link layers suitable for supporting IP and the transport-layer protocols UDP and TCP, one might conclude there is no particular problem in supporting VoWiFi. Indeed, many people now routinely use VoIP services provided by Skype, MSN and others over WiFi links at home and in the office. The same H.323 or SIP protocols as used on a desktop computer can be expected to work on WiFi-connected devices.

VoIP and the Future of Satellite Communications

You may remember a few years ago when everyone first started talking seriously about voice over IP (VoIP). Several companies claimed that the world was in for another technological revolution and that paradigms were going to start shifting dramatically. Now, that market is finally beginning to materialize, and VoIP is poised, once again, to take the communications world by storm.

AT&T plans to make available a consumer VoIP offering for customers in the top 100 markets during the first quarter of 2004, and Verizon has similar VoIP plans for the second quarter. Vonage already has been delivering consumer and business VoIP services successfully and continues to gain subscribers on a regular basis. In the satellite communications industry, VoIP has been quietly but widely available in some format for the past two years.

For the remote ocean communications sector of the satellite communications industry, the news about VoIP and IP communication has traveled slowly. Most ship-to-ship and ship-to-shore (or remote rig-to-shore) communication has operated unchanged for several years. That’s not all that surprising. For workers focused on drilling efficiencies, manual labor or machinery maintenance, communications services always have been a secondary concern. But the Internet and data transmission have become much more important in recent years. And workers require better communications for business and personal reasons.

Offshore workers need to transmit real time operating data to the central office on the mainland, participate in video conferences, dial-in to make reports during important meetings, read stories from the online version of the hometown newspaper, see the digital photos of baby’s first birthday celebration, and make sure they don’t get sniped at the last minute during eBay auctions. For those reasons and many others, traditional remote satellite communications have finally come under pressure to catch up with the wired world.

Frame relay is the current legacy network for people in the drilling world, but it doesn't offer the flexibility companies have come to expect from their mainland communications. IP via satellite not only provides that flexibility, it also offers a boost in reliability: IP-based transport protocols are more successful at retransmitting corrupted data than frame relay. The new IP networks use Ethernet to create a virtual private network (VPN) that operates with multi-protocol label switching (MPLS) over satellite. MPLS combines the connection-oriented privacy and quality of service of traditional frame relay or ATM networks with the simple and efficient connectivity of IP.

Remote provisioning is one of the most exciting elements of IP communication via satellite. Services and applications can be added and/or subtracted with ease. For the customer/end user, there’s an element of future-proofing with a network that can be provisioned centrally from a network operations center (NOC). The end user isn’t forced to constantly purchase new equipment or new software applications. All of the new services, in addition to any moves/adds/changes, are provisioned from the NOC. This creates dramatic cost savings and a pleasant reduction in hassle for everyone involved. On land, such efficiencies eliminate truck roll – but on water, the truck roll (which really involves helicopters and boats) is far more costly. Remote provisioning takes care of that issue.

Not all satellite communications providers offer these advanced features, but virtually every offshore provider is offering some form of IP communications. Even so, experts in the industry believe it could be several years before IP becomes the de facto standard for offshore communications. In the meantime, those companies interested in IP via satellite are encouraged to research the technology and look for providers with a strong track record of reliability and technological innovation.

References

1. Voice over Internet Protocol (VoIP): "Crossing the Chasm".
2. COMP60242: Mobile Computing '10, B3. VoIP & VoWLAN.
3. Errol Olivier, "VoIP and the Future of Satellite Communications".

Neural Networks

A Neural Network is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system: a large number of highly interconnected simple processing elements (neurons) working in unison to solve specific problems. Artificial Neural Networks, like people, learn by example. An Artificial Neural Network is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons, and the same is true of Neural Networks. Because of their massive parallelism, Neural Networks can process information at great speed.

Neural Networks are applicable in many walks of life, and the sections below mention a number of areas where their use gives excellent results. Neural network models can be developed from measured or simulated data, and can also be used to update or improve the accuracy of existing models.

How do Neural Networks function?

Artificial neurons are made to mimic those of the human brain. The defining element is the neuron: it collects signals from other neurons through a number of fine structures, then sends out spikes of electrical activity through the axon, which splits into thousands of branches. At the end of each branch, a synapse converts the activity of the axon into electrical effects that inhibit or excite activity in the connected neurons. The neural network learns by changing the effectiveness of the synapses, so that the influence one neuron has on another changes (Stergiou and Siganos). Artificial Neural Networks are an idealized model of this real network.

Neural Networks operate in two modes: the using mode and the training mode. In the most basic sense, the network is trained by presenting a large number of inputs to the system and teaching it how to react to each one, i.e. whether to fire and, if so, with what output. In more complex problems the inputs are assigned weights that determine the response of the system (see diagram 3, adapted from Stergiou and Siganos). Pattern recognition is the most important capability of these networks: the network compares an unknown input against everything it learned in training mode, then chooses the output that is most similar.

There are two major subdivisions of Neural Networks: feed-forward (bottom-up) and feedback (interactive) networks. In the feed-forward setup, signals are restricted to travel in one direction, from input to output. This means the output of one layer cannot affect that same layer; because of this, feed-forward networks are mostly used in pattern recognition. In feedback networks (see diagram 5, adapted from work by the Japan Atomic Energy Research Institute), signals are allowed to travel in both directions. These networks are more powerful and dynamic; their 'state' changes continuously until they reach an equilibrium point. Neural Networks are divided into layers: the input, hidden and output layers. The input units represent the data; the activity of the hidden units is determined by the input units and the weights on the connections between them, and the activity of the output units is determined in the same way by the hidden units.
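The layered, feed-forward arrangement described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular standard architecture: the layer sizes, the sigmoid activation, and the random weights are all illustrative choices.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def feed_forward(inputs, hidden_weights, output_weights):
    """One pass through a layered network. Signals travel one way only:
    each hidden unit's activity is a weighted sum of the inputs squashed
    by a sigmoid, and each output unit treats the hidden activities the
    same way."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    return [sigmoid(sum(w * h for w, h in zip(ws, hidden)))
            for ws in output_weights]

# Illustrative sizes: 2 inputs, 3 hidden units, 1 output, random weights.
hidden_weights = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
output_weights = [[random.uniform(-1, 1) for _ in range(3)]]
out = feed_forward([0.5, -0.2], hidden_weights, output_weights)
```

Training would then consist of adjusting `hidden_weights` and `output_weights` until the outputs match the desired responses, which is exactly the weight adjustment the text describes.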

What are the major advantages of Neural Networks?

Neural networks succeed where everyday computer systems produce mediocre results. They can take extremely complicated or imprecise data, extract patterns and deduce trends that are too complex to be noticed by humans or other computing techniques. In a very real way, a neural network can be thought of as an expert: it is capable of creating projections given new scenarios. Neural networks do this effectively because they offer adaptive learning, self-organization, real-time operation and fault tolerance. Adaptive learning is the ability to learn how to do tasks based on the data given for training or initial experience (Stergiou and Siganos). Self-organization is the creation of the network's own representation of the information it receives during the learning period. Real-time operation is possible because neural network computations may be carried out in parallel.

The Disadvantages of neural networks

The results neural networks return are at best a good approximation of a solution; they do not usually return an optimal solution, and in some cases the results diverge. This is because choosing the right structure for a neural network is itself a complex problem. Present technology also cannot fully model the human brain; that is, it does not scale up to handling billions of neurons (Champandard).

The differences between Neural Networks and conventional computing

The major difference is their approach to problem solving. Conventional computers use an algorithmic approach: they need to know all the specific steps required to solve a problem, and therefore can solve only those problems that we already know how to solve. In contrast, neural networks can perform tasks we do not know exactly how to do. An Artificial Neural Network is not programmed to perform a specific task; instead, carefully chosen examples are used to teach the system. This approach lets the network solve the problem by itself, so its operation can be unpredictable.

Applications of Neural Networks

Neural networks have an extensive array of uses in daily, real-world applications. They have been successfully applied in a diverse range of fields including finance, medicine, engineering, geology and physics. Many tasks that we need to perform frequently can be handled by neural network implementations, which execute these actions reliably, effortlessly and tirelessly (humans are normally affected by fatigue and emotion).

Neural Networks in Business

Neural networks have broad application to real-world business problems and have already been successfully applied in many industries, e.g. paper mills. Business is a diverse field with several general areas of specialization, such as accounting or financial analysis, and almost any neural network application fits into one business area or financial analysis. There is considerable potential for using neural networks for business purposes such as resource allocation and scheduling. Since neural networks are best at identifying patterns or trends in data, they are well suited to prediction and forecasting needs, including sales forecasting, industrial process control, customer research, data validation, risk management and target marketing. Neural networks have also been very useful in credit evaluation, especially through their use in credit-scoring systems.

Some marketing applications have been integrated with neural network systems, one such being the Airline Marketing Tactician (AMT), a computer system made up of various intelligent technologies including expert systems. A feed-forward neural network, trained using back-propagation, assists the marketing control of airline seat allocations. The adaptive neural approach was amenable to rule expression, and because the application's environment changed rapidly and constantly, a continuously adaptive solution was required. The system is used to monitor and recommend booking advice for each departure. Such information has a direct impact on the profitability of an airline and can provide a technological advantage to users of the system.

Neural networks are also used to monitor and enhance transportation and communication facilities, with specific applications such as recognition of speakers in communications and recovery of telecommunications from faulty software. In monitoring, networks have been used to watch the state of aircraft engines: by monitoring vibration levels and sound, early warning of engine problems can be given. British Rail has also been testing a similar application for monitoring diesel engines.

Neural Networks in Economics

Neural Networks have been applied in economics to a great extent. Investment analysis is one important application: neural networks have been extensively tested and used to predict the movement of stocks, currencies and the like from previous data, with results so impressive that they are replacing earlier, simpler linear models. Neural networks have also helped companies avoid disaster through bankruptcy prediction, and allow corporate organizations to use job-assignment and sales-forecasting applications to help achieve the best results.

Neural Networks in Recognition and Matching Applications

It is widely known that neural networks can serve as a powerful tool for pattern recognition and classification, especially when the distribution of the target classes is unknown or cannot be expressed as a mathematical model. Studies have also shown that neural networks can be used for feature extraction, i.e., to produce new features based on the original features or inputs to a neural network. The set of new features usually contains fewer and more informative features, so that subsequent classification can be conducted at a lower computational cost using only the condensed new features. There is a long list of applications of neural networks in this field, including pattern matching, signature verification, image and facial recognition, text and speech recognition, three-dimensional object recognition, synthetic numerical character recognition, handwritten word recognition, interpretation of multi-meaning Chinese words, and texture analysis.

Neural Networks in Data Mining

There is also strong potential for using neural networks for database mining, that is, searching for patterns implicit within the explicitly stored information in databases. Much work in this area applies neural networks such as the Hopfield-Tank network for optimization and scheduling. One particular example concerns classification, a data mining problem that has received great attention recently; here, an approach that extracts symbolic classification rules from neural networks has been well received. With this approach, concise symbolic rules with high accuracy can be extracted from a trained network. The network is first trained to achieve the required accuracy rate; redundant connections are then removed by a network pruning algorithm; the activation values of the hidden units are analyzed; and classification rules are generated from the result of this analysis. The effectiveness of this approach has been demonstrated by experimental results on a set of standard data mining test problems.
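The prune-after-training idea can be illustrated on a deliberately tiny scale. The sketch below trains a single sigmoid unit on made-up toy data in which only the first feature matters, then "prunes" any connection whose trained weight is negligible; real rule-extraction systems prune a full network and then analyze hidden-unit activations, so treat this purely as an illustration of the principle.

```python
import math
import random

random.seed(1)

# Made-up toy data: the label depends only on the first feature;
# the second feature is pure noise.
xs = [random.uniform(-1, 1) for _ in range(200)]
data = [([x, random.uniform(-1, 1)], 1 if x > 0 else 0) for x in xs]

w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(200):                          # train a single sigmoid unit
    for features, label in data:
        z = sum(wi * xi for wi, xi in zip(w, features)) + b
        p = 1.0 / (1.0 + math.exp(-z))
        err = p - label
        w = [wi - lr * err * xi for wi, xi in zip(w, features)]
        b -= lr * err

# Prune connections whose trained weight is negligible; the connections
# that survive point at the features a symbolic rule would mention.
pruned_net = [(i, wi) for i, wi in enumerate(w) if abs(wi) > 1.0]
```

After training, the weight on the informative feature dominates, and pruning leaves a network small enough to read a rule out of ("classify as 1 when feature 0 is large").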

CONCLUSION

Neural networks are used in many applications, some of which have been discussed in this paper. Dedicated Neural Network hardware can be designed for the few niche areas where system performance is the main issue. In this way, neural networks help in developing systems that are more robust, reliable, fast and accurate. Because neural networks can learn and generalize from available data, model development is possible even when component formulae are unavailable.

References:

1. http://suraj.lums.edu.pk/~cs631s04
2. http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html#Applications%20of%20neural%20networks
3. http://www.cs.stir.ac.uk/~lss/NNIntro/InvSlides.html#where
4. http://www.cs.brandeis.edu/~cs113/classprojects/~pdempsey/cs113/nn.html

Network Adapter

A computer communicates on a network through a network interface card, or network adapter.

A network adapter plugs into the motherboard of a computer and connects to a network cable. Network adapters perform all the functions required to communicate on a network: they convert data from the form stored in the computer to the form transmitted or received on the cable.

A network adapter receives data to be transmitted from the motherboard of a computer into an area of memory called a buffer. The data in the buffer is then passed through electronics that calculate a checksum value for the block of data and add address information: the address of the destination card, and the adapter's own address, which indicates where the data is from. (Each network adapter card is assigned a permanent, unique address at the time of manufacture.) The block is now referred to as a frame. The network adapter then transmits the frame one bit at a time onto the network cable, sending the address information first, followed by the data and then the checksum.
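The frame layout just described can be modeled in a short Python sketch. The 6-byte addresses and the CRC-32 checksum here are illustrative stand-ins for whatever a particular adapter actually uses; the point is the order of fields (addresses, then data, then checksum) and the receiver's ability to detect corruption.

```python
import zlib

# Hypothetical 6-byte adapter addresses (real cards get a permanent,
# unique address at manufacture).
MY_ADDRESS = bytes.fromhex("0A1B2C3D4E5F")
BROADCAST = bytes.fromhex("FFFFFFFFFFFF")

def build_frame(dest, payload):
    """Assemble a frame as described above: destination address and
    source address first, then the data, then a checksum over both."""
    header = dest + MY_ADDRESS
    checksum = zlib.crc32(header + payload).to_bytes(4, "big")
    return header + payload + checksum

def verify_frame(frame):
    """Recompute the checksum on arrival; a mismatch means the frame
    was corrupted in transit."""
    body, checksum = frame[:-4], frame[-4:]
    return zlib.crc32(body).to_bytes(4, "big") == checksum

frame = build_frame(BROADCAST, b"hello")
```

Flipping even one bit of the frame makes `verify_frame` fail, which is how the receiving adapter knows to discard a damaged frame.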

A network adapter card must match the bus of the computer it is placed in, the type of network to which it is connected, and the media type to which it is attached.

The bus could be one of ISA, EISA, Micro Channel, VESA Local Bus, PCI, NuBus, PC Card (PCMCIA) or a proprietary local bus. Some computers have more than one bus, e.g. both an ISA and a PCI bus.

The network could be Ethernet, Fast Ethernet, Token Ring, ARCnet, ATM, or a proprietary network standard.

The media type could be coaxial cable, unshielded twisted pair, optical fiber, radio, or wireless infrared.

www.ICT-Teacher.com

Cable standards in networking

One of the benefits of learning networking is that much of it is governed by well-defined standards. These allow equipment from multiple vendors to work together without the problems brought by proprietary platforms.

Network cables also come in standard specifications. You can purchase a 10BASE-T cable from five different manufacturers and each one will work with your equipment without a problem.

In this section, you will learn:

  • Standard copper and fiber optic cables;
  • Characteristics of cable standards.

Characteristics of the cable standards:

  • 10BASE-T and 10BASE-FL
  • 100BASE-TX and 100BASE-FX
  • 1000BASE-T, 1000BASE-CX, 1000BASE-SX and 1000BASE-LX
  • 10GBASE-SR, 10GBASE-LR and 10GBASE-ER

10BASE-T Cable Standard

10BASE-T is one of the Ethernet standards for cabling in a network environment. It uses twisted pair cable with a maximum length of 100 meters and operates at 10 Mbps. It is commonly used in a star topology.

10BASE-FL Cable Standard

10BASE-FL is a fiber optic cable standard designed to run at 10 Mbps. It is similar to 10BASE-T, though the media type is fiber, and it can be used over distances of up to 2000 meters.

100BASE-TX Cable Standard

100 Mbps Fast Ethernet over category 5 twisted pair cable. Maximum cable length of 100 meters.

100BASE-FX Cable Standard

100 Mbps Fast Ethernet standard over fiber cable. Can transmit data up to 2000 meters.

1000BASE-T Cable Standard

Gigabit Ethernet over twisted pair copper wires. Transmits up to 1000 Mbps with a 100 meter maximum cable length. Cat5 or better required (Cat6 cabling recommended).

1000BASE-CX Cable Standard

Gigabit Ethernet over a special copper twinax cable. Up to 25 meters in length. Typically used in a wiring closet or data center as a short jumper cable.

1000BASE-SX Cable Standard

Gigabit Ethernet using a short-wavelength laser device over multimode fiber optic cable: roughly 550 meters maximum on 50 µm core fiber, or roughly 275 meters on 62.5 µm core fiber. 1000 Mbps maximum transfer speed.

1000BASE-LX Cable Standard

Gigabit Ethernet using long-wavelength laser transmitters over fiber optic cable, reaching several kilometers over single-mode fiber. SC connectors are used for terminating the cable.

10GBASE-SR Cable Standard

The 802.3ae standard: a 10 Gbps (gigabits per second) transfer rate, with a reach of 33 meters on 62.5 µm fiber optic cable and up to 300 meters on 50 µm cables.

10GBASE-LR Cable Standard

10 Gbps transfer rate over single-mode fiber optic cable, with a maximum distance of 10 kilometers.

10GBASE-ER Cable Standard

10 Gbps transfer rate over single-mode fiber optic cable, with a maximum cable length of 40 kilometers.
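The standards above can be collected into a small lookup table. This sketch transcribes figures from the descriptions in this section (the SX and LX variants are omitted because their reach depends on the fiber core size); the `fastest_for_run` helper is a hypothetical convenience function, not part of any standard.

```python
# (speed in Mbps, medium, maximum distance in meters), per this section
CABLE_STANDARDS = {
    "10BASE-T":    (10,    "twisted pair",        100),
    "10BASE-FL":   (10,    "fiber",              2000),
    "100BASE-TX":  (100,   "Cat 5 twisted pair",  100),
    "100BASE-FX":  (100,   "fiber",              2000),
    "1000BASE-T":  (1000,  "twisted pair",        100),
    "1000BASE-CX": (1000,  "twinax copper",        25),
    "10GBASE-LR":  (10000, "single-mode fiber", 10000),
    "10GBASE-ER":  (10000, "single-mode fiber", 40000),
}

def fastest_for_run(distance_m):
    """Pick the highest-speed standard whose reach covers a cable run."""
    options = [(speed, name)
               for name, (speed, medium, reach) in CABLE_STANDARDS.items()
               if reach >= distance_m]
    return max(options)[1] if options else None
```

For example, a 30 km run rules out everything except 10GBASE-ER, while nothing in the table reaches 50 km.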

History of computers

Computers are a part of so many aspects of our society today that it is hard to imagine life without them. To understand and appreciate fully the impact that computers have had on our lives, it is important to understand their history.

What follows is a brief history of the development of computer technology. There are five generations of modern computer history, the first of which was considered to have occurred between 1944 and 1956. Long before the first generation of computers evolved, computer development was well on its way.

Joseph-Marie Jacquard developed a loom for weaving cloth whose operation was controlled by means of cards with holes punched in them. This card laid the foundation for computer development. In 1886, Herman Hollerith improved on Jacquard’s punched card by developing a card that could be used with electrical rather than mechanical equipment. The Hollerith (or IBM) card is still very much in use.

First generation of computers – 1944-1956: Vacuum Tubes

The history of the first generation of computers began in the mid-1940s, during the Second World War, when computers were needed to create ballistic charts for the U.S. Navy.

In 1944, engineers from IBM and Howard Aiken of Harvard University developed a machine called the Mark I. This 50-foot long and 8-foot high machine was able to add, subtract, multiply, divide, and refer to data tables using punched cards.

Another computer whose development was spurred by the war was the ENIAC, built through a partnership between the U.S. government and the University of Pennsylvania. Completed in 1946 by J. Presper Eckert and John W. Mauchly, it was the first all-electronic computer, based on vacuum tubes, and could make calculations a thousand times faster than earlier devices.

In 1947, John von Neumann joined the University of Pennsylvania team and developed a method for storing programs electronically. This invention of the stored program led the way for the development of today’s computers.

Then, in 1951, came the Universal Automatic Computer (UNIVAC I), designed by Remington Rand; early units went to the U.S. Census Bureau and General Electric. UNIVAC famously predicted the winner of the 1952 presidential election, Dwight D. Eisenhower. In first generation computers, the operating instructions, or programs, were built specifically for the task the computer was manufactured to perform, and machine language was the only way to tell these machines what to do. Programming these computers was very difficult, and even more so when malfunctions occurred. First generation computers used vacuum tubes and magnetic drums (for data storage).

Second generation of computers - 1956-1963: Transistors

The second generation of computers spanned roughly 1956 to 1963. The invention of the transistor changed the way computers were developed: the transistor replaced the large, cumbersome vacuum tube in televisions, radios, and computers, and as a result the size of electronic machinery has been shrinking ever since. The transistor was at work in the computer by 1956, and by the mid-1960s businesses, universities, and governments used computers. The second generation of computers began to contain many of the things we find in computers today: printers, tape storage, disk storage, and memory.

Second generation computers also gained operating systems, and much financial information was processed using these machines. In second generation computers, the instructions (programs) could be stored inside the computer's memory. High-level languages such as COBOL (Common Business-Oriented Language) and FORTRAN (Formula Translator) came into use, and both are still used for some applications today.

Third generation of computers - 1964-1971: Integrated Circuits

The third generation of computers spanned 1964 to 1971. These computers were characterized by the integrated circuit, a semiconductor chip developed in the early 1960s. Another third-generation development was the use of an operating system that allowed machines to run many different programs at once, with a central program monitoring and coordinating the computer’s memory.

Fourth generation of computers - 1971-Present: Microprocessors

The fourth generation of computers was characterized by the ongoing improvement of the silicon chip. The Intel 4004 chip, developed in 1971, placed all the components of a computer (central processing unit, memory, and input and output controls) on a minuscule chip. The silicon chip was used not only in computers: everyday household items such as microwave ovens, television sets, and automobiles with electronic fuel injection incorporated these microprocessors. Computers were becoming cheaper, smaller, and faster. In 1981, IBM introduced its personal computer (PC) and began marketing it to the general public for use in the home, office, and school. The number of personal computers in use more than doubled from 2 million in 1981 to 5.5 million in 1982; ten years later, 65 million personal computers were in use.

Fifth generation of computers - Present and Beyond: Artificial Intelligence

Many advances in the science of computer design and technology are taking place to form the fifth generation of computers. Computers that can interpret the spoken word and imitate human reasoning are evolving. It is difficult to imagine now how the computer will affect your life in the next twenty years. If only those inventors who led the way to modern computing could witness the speed of today’s computers and our society’s total dependence on them. How do you think they would react to the automatic teller machines (ATM) which let us conduct banking transactions from virtually anywhere in the world; or to computerized telephone switching centers that keep lines of communication untangled; or to supermarket scanners that calculate our grocery bills while keeping store inventory? It is difficult to imagine.

Computer Viruses

A computer virus is a potentially damaging computer program designed to affect, or infect, your computer negatively by altering the way it works without your knowledge or permission. More specifically, a computer virus is a segment of program code that implants itself in a computer file and spreads systematically from one file to another. Viruses can spread to your computer if an infected floppy disk is in the disk drive when you boot the computer, if you run an infected program, or if you open an infected data file in a program.

Computer viruses, however, do not generate by chance. Creators, or programmers, of computer virus programs write them for a specific purpose – usually to cause a certain type of symptom or damage. Some viruses are harmless pranks that simply freeze a computer temporarily or display sounds or messages. When the Music Bug virus is triggered, for example, it instructs the computer to play a few chords of music. Other viruses, by contrast, are designed to destroy or corrupt data stored on the infected computer. Thus, the symptom or damage caused by a virus can be harmless or cause significant damage, as planned by its creator.

Viruses have become a serious problem in recent years. Currently, more than 45,000 known virus programs exist and an estimated six new virus programs are discovered each day. The increased use of networks, the Internet, and e-mail has accelerated the spread of computer viruses, by allowing individuals to share files – and any related viruses – more easily than ever.

Types of Viruses

Although numerous variations are known, four main types of viruses exist: boot sector viruses, file viruses, Trojan horse viruses, and macro viruses. A boot sector virus replaces the boot program used to start a computer with a modified, infected version of the boot program. When the computer runs the infected boot program, the computer loads the virus into its memory. Once the virus is in memory, it spreads to any disk inserted into the computer. A file virus attaches itself to or replaces program files; the virus then spreads to any file that accesses the infected program. A Trojan horse virus (named after the Greek myth) is a virus that hides within or is designed to look like a legitimate program. A macro virus uses the macro language of an application, such as word processing or spreadsheet, to hide virus code. When you open a document that contains an infected macro, the macro virus loads into memory. Certain actions, such as opening the document, activate the virus. The creators of macro viruses often hide them in templates so they will infect any document created using the template.

Depending on the virus, certain actions can trigger the virus. Many viruses activate as soon as a computer accesses or runs an infected file or program. Other viruses, called logic bombs or time bombs, activate based on specific criteria. A logic bomb is a computer virus that activates when it detects a certain condition. One disgruntled worker, for example, planted a logic bomb that began destroying files when his name appeared on a list of terminated employees. A time bomb is a type of logic bomb that activates on a particular date. A well-known time bomb is the Michelangelo virus, which destroys data on a hard disk on March 6, Michelangelo’s birthday.
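The trigger conditions for logic bombs and time bombs are just predicates evaluated against the computer's state. A minimal sketch of the date-based kind, modeling only the trigger condition described above (the March 6 date comes from the Michelangelo example; nothing destructive is modeled):

```python
import datetime

def michelangelo_style_trigger(today):
    """A time bomb activates on a particular date; the Michelangelo
    virus, for example, fires on March 6. This function models only
    the trigger check a time bomb performs."""
    return today.month == 3 and today.day == 6
```

A logic bomb would use a different predicate, such as checking whether a name appears on a list of terminated employees, but the activate-on-condition structure is the same.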

Another type of malicious program is a worm. Although often it is called a virus, a worm, unlike a virus, does not attach itself to another program. Instead, a worm program copies itself repeatedly in memory or on a disk drive until no memory or disk space remains. When no memory or disk space remains, the computer stops working. Some worm programs even copy themselves to other computers on a network.

Virus Detection and Removal

No completely effective methods exist to ensure that a computer or network is safe from computer viruses. You can take precautions, however, to protect your home and work computers from virus infections. These precautions are discussed in the following paragraphs.

An antivirus program protects a computer against viruses by identifying and removing any computer viruses found in memory, on storage media, or on incoming files. Most antivirus programs also protect against malicious ActiveX code and Java applets that might be included in files you download from the Web. An antivirus program scans for programs that attempt to modify the boot program, the operating system, and other programs that normally are read from but not modified.

Antivirus programs also identify a virus by looking for specific patterns of known virus code, called a virus signature, which they compare to a virus signature file. You should update your antivirus program’s virus signature files frequently so they include the virus signatures for newly discovered viruses and can protect against viruses written after the antivirus program was released.
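The core of signature matching can be shown in a few lines. The signature names and byte patterns below are entirely made up for illustration; production scanners hold many thousands of signatures and use far more efficient multi-pattern matching, but the principle is the same.

```python
# Hypothetical virus signature file: name -> byte pattern known to
# occur in that virus's code.
SIGNATURES = {
    "demo-virus-a": b"\xde\xad\xbe\xef",
    "demo-virus-b": b"EVIL_PAYLOAD",
}

def scan(file_bytes, signatures=SIGNATURES):
    """Return the names of every known signature found in a file's
    bytes: the essence of signature-based detection."""
    return [name for name, pattern in signatures.items()
            if pattern in file_bytes]

infected = b"MZ...\xde\xad\xbe\xef...rest of the program"
clean = b"MZ...an ordinary, uninfected program"
```

This also makes the polymorphic-virus problem mentioned below concrete: if the virus rewrites its own bytes on each infection, no fixed pattern in the signature file will match.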

Even with an updated virus signature file, however, antivirus programs can have difficulty detecting some viruses. One such virus is a polymorphic virus, which modifies its program code each time it attaches itself to another program or file. Because its code never looks the same, an antivirus program cannot detect a polymorphic virus by its virus signature.

Another technique that antivirus programs use to detect viruses is to inoculate existing program files. To inoculate a program file, the antivirus program records information such as the file size and file creation date in a separate inoculation file. The antivirus program then can use this information to detect if a computer virus tampers with the inoculated program file. Some sophisticated viruses, however, take steps to avoid detection. Such a virus, called a stealth virus, can infect a program file, but still report the size and creation date of the original, uninfected program.
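The inoculation record described above is simply stored file metadata compared against the live file. A minimal sketch, using a temporary file as a stand-in for a program file (real antivirus products record more attributes and protect the inoculation file itself):

```python
import os
import tempfile

def inoculate(path):
    """Record the file's size and modification time, the way an
    antivirus program records data about a program file in a
    separate inoculation file."""
    st = os.stat(path)
    return {"size": st.st_size, "mtime": st.st_mtime}

def tampered(path, record):
    """Report whether the file no longer matches its inoculation
    record, which suggests a virus has modified it."""
    st = os.stat(path)
    return st.st_size != record["size"] or st.st_mtime != record["mtime"]

# Demonstration on a temporary file standing in for a program file.
fd, program = tempfile.mkstemp()
os.write(fd, b"original program code")
os.close(fd)
record = inoculate(program)
```

A stealth virus defeats exactly this check by intercepting the metadata queries and reporting the original size and date, which is why the text calls it out as a limitation.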

Once an antivirus program identifies an infected file, it can remove the virus or quarantine the infected file. When a file is quarantined, the antivirus program places it in a separate area of your computer until you can remove the virus, thus ensuring that other files will not become infected.

In addition to detecting and inoculating against viruses, most antivirus programs also have utilities to remove or repair infected programs and files. If the virus has infected the boot program, however, the antivirus program first will require you to restart the computer with a floppy disk called a rescue disk. The rescue disk, or emergency disk, is a disk that contains an uninfected copy of key operating system commands and startup information that enables the computer to restart correctly. Once you have restarted the computer using a rescue disk, you can run repair and removal programs to remove infected files and repair damaged files. If the program cannot repair the damaged files, you may have to replace, or restore, them with uninfected backup copies of the file.

802 Standards. IEEE 802.2, 802.3, 802.5, 802.11

The Institute of Electrical and Electronics Engineers (IEEE) is a standards-setting body. Each of its standards is numbered, and a subset of the number identifies the actual standard. The 802 family of standards was developed for computer networking.

In this section, you will learn:

- What the 802.2, 802.3, 802.5, 802.11 standards encompass;

- Features, topology, and network cabling for each of these standards.

First, let's discuss 802. IEEE creates standards for things like networking so that products from different vendors can be compatible with one another. You may have heard of IEEE 802.11b: this is the standard IEEE has set for (in this example) wireless-b networking.

In this section, we will look at several networking technologies: 802.2, 802.3, 802.5, 802.11, and FDDI. Each of these is just a standard set of technologies, each with its own characteristics.

802.2 Logical Link Control

The technical definition for 802.2 is "the standard for the upper Data Link Layer sublayer also known as the Logical Link Control layer. It is used with the 802.3, 802.4, and 802.5 standards (lower DL sublayers)."

802.2 "specifies the general interface between the network layer (IP, IPX, etc.) and the data link layer (Ethernet, Token Ring, etc.)".

Basically, think of 802.2 as the "translator" for the Data Link Layer. It is concerned with managing traffic over the physical network and is responsible for flow and error control. When the Data Link Layer wants to send data over the network, 802.2 Logical Link Control helps make this possible. It also helps by identifying the line protocol, like NetBIOS or NetWare.

The LLC acts like a software bus, allowing multiple higher layer protocols to access one or more lower layer networks. For example, if you have a server with multiple network interface cards, the LLC will forward packets from those upper layer protocols to the appropriate network interface. This means the upper layer protocols do not need specific knowledge of the lower layer networks in use.
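The "software bus" idea can be sketched in a few lines of Python. This is a hypothetical illustration, not real LLC code: upper layer protocols hand payloads to one dispatcher, which forwards them to the right interface without the protocols knowing anything about the hardware below.

```python
# Hypothetical sketch: the LLC layer as a dispatcher that lets several
# upper-layer protocols share one or more network interfaces.
class LogicalLinkControl:
    def __init__(self):
        self.interfaces = {}   # interface name -> list of frames "sent" on it

    def register_interface(self, name):
        self.interfaces[name] = []

    def send(self, protocol, payload, interface):
        # The upper-layer protocol (IP, IPX, ...) only names itself and a
        # destination interface; the LLC handles the hand-off downward.
        frame = {"protocol": protocol, "payload": payload}
        self.interfaces[interface].append(frame)
        return frame

llc = LogicalLinkControl()
llc.register_interface("eth0")
llc.register_interface("eth1")
llc.send("IP", b"web traffic", "eth0")
llc.send("IPX", b"netware traffic", "eth1")
```

Neither IP nor IPX in this sketch knows which card carries its traffic; only the dispatcher does, which is exactly the decoupling the LLC provides.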

802.3 Ethernet

Now that we have an overview of the OSI model, we can continue with these topics. I hope you have a clearer picture of the network model and where things fit on it.

802.3 is the standard by which Ethernet operates. It is the standard for CSMA/CD (Carrier Sense Multiple Access with Collision Detection). This standard encompasses both the MAC and Physical Layer standards.

CSMA/CD is what Ethernet uses to control access to the network medium (network cable). If no data is on the wire, any node may attempt to transmit. If the nodes detect a collision, both stop transmitting and wait a random amount of time before retransmitting the data.
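The "random amount of time" follows a truncated binary exponential backoff: after each successive collision the range of possible wait times doubles, up to a cap. Here is a minimal Python sketch of that scheme (the 51.2 µs slot time is the classic 10 Mbps figure; the function name is ours):

```python
import random

def csma_cd_backoff(attempt, slot_time_us=51.2):
    """Truncated binary exponential backoff, as Ethernet uses after a
    collision: wait a random number of slot times in [0, 2^k - 1]."""
    k = min(attempt, 10)                 # the exponent is capped at 10
    slots = random.randint(0, 2**k - 1)  # pick a random slot count
    return slots * slot_time_us

# After the first collision a node waits 0 or 1 slot times;
# the range doubles with each further collision.
delays = [csma_cd_backoff(n) for n in range(1, 4)]
```

Because each colliding node picks its delay independently, the odds that they collide again shrink with every retry.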

The original 802.3 standard is 10 Mbps (Megabits per second). 802.3u defined the 100 Mbps (Fast Ethernet) standard, 802.3z/802.3ab defined 1000 Mbps Gigabit Ethernet, and 802.3ae defines 10 Gigabit Ethernet.

Commonly, Ethernet networks transmit data in frames, or small units of information. A frame can be a minimum of 64 bytes or a maximum of 1518 bytes (72 and 1526 bytes counting the 8-byte preamble).
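These limits are easy to capture as a small validity check. A short Python sketch (the constants follow the classic, non-jumbo Ethernet figures above; undersized payloads are padded on real hardware so the frame reaches the minimum):

```python
MIN_FRAME = 64     # bytes, excluding the 8-byte preamble
MAX_FRAME = 1518   # bytes, for classic (non-jumbo) Ethernet

def valid_frame_size(nbytes):
    """Return True if a frame of nbytes fits classic Ethernet limits."""
    return MIN_FRAME <= nbytes <= MAX_FRAME

# A 40-byte payload would be padded up to the 64-byte minimum on the wire;
# anything over 1518 bytes must be split across multiple frames.
valid_frame_size(64)    # smallest legal frame
valid_frame_size(1519)  # too large for classic Ethernet
```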

The most common topology for Ethernet is the star topology.

802.5 Token Ring

As we mentioned earlier when discussing the ring topology, Token Ring was developed primarily by IBM. Token ring is designed to use the ring topology and utilizes a token to control the transmission of data on the network.

The token is a special frame which is designed to travel from node to node around the ring. When it does not have any data attached to it, a node on the network can modify the frame, attach its data and transmit. Each node on the network checks the token as it passes to see if the data is intended for that node; if it is, it accepts the data and transmits a new token. If it is not intended for that node, it retransmits the token on to the next node.
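The token-passing cycle above can be shown as a toy Python simulation. This is a deliberately simplified sketch (names and structure are ours, not from the 802.5 standard): the sender seizes the free token, each subsequent node inspects the frame, and the receiver takes the data and releases a fresh token.

```python
def circulate(nodes, sender, receiver, payload):
    """Toy token ring: walk the ring once and deliver one frame."""
    token = {"busy": False, "dst": None, "data": None}
    delivered = None
    start = nodes.index(sender)
    order = nodes[start:] + nodes[:start]    # ring order from the sender
    # The sender seizes the free token and attaches its data.
    token.update(busy=True, dst=receiver, data=payload)
    for node in order[1:] + [sender]:        # frame travels around the ring
        if token["busy"] and token["dst"] == node:
            delivered = (node, token["data"])
            # Receiver accepts the data and puts a new free token on the ring.
            token = {"busy": False, "dst": None, "data": None}
    return delivered

circulate(["A", "B", "C", "D"], "B", "D", "hello")  # D receives "hello"
```

Note that nodes between the sender and receiver simply pass the busy frame along, which is why only one station transmits at a time on a token ring.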

The token ring network is designed in such a way that each node on the network is guaranteed access to the token at some point. This equalizes the data transfer on the network. This is different from an Ethernet network, where each workstation has equal access to grab the available bandwidth, with the possibility of one node using more bandwidth than other nodes.

Originally, token ring operated at speeds of 4 Mbps and 16 Mbps. 802.5t allows for 100 Mbps speeds and 802.5v provides for 1 Gbps over fiber.

Token ring can be run over a star topology as well as the ring topology.

There are three major cable types for token ring: Unshielded twisted pair (UTP), Shielded twisted pair (STP), and fiber.

Token ring utilizes a Multistation Access Unit (MAU) as a central wiring hub. This is also sometimes written MSAU when referring to token ring networks.

802.11 Wireless Network Standards

802.11 is the collection of standards set up for wireless networking. You are probably familiar with the three popular standards: 802.11a, 802.11b, and 802.11g; the latest is 802.11n. Each standard uses a particular radio frequency band to connect to the network and has a defined upper limit for data transfer speeds.

802.11a was one of the first wireless standards. It operates in the 5 GHz radio band and can achieve a maximum of 54 Mbps. It wasn't as popular as the 802.11b standard due to higher prices and lower range.

802.11b operates in the 2.4 GHz band and supports up to 11 Mbps, with a range of up to several hundred feet in theory. It was the first real consumer option for wireless and very popular.

802.11g is a standard in the 2.4 GHz band operating at 54 Mbps. Since it operates in the same band as 802.11b, 802.11g is compatible with 802.11b equipment. 802.11a is not directly compatible with 802.11b or 802.11g because it operates in a different band.
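The band-and-speed summary above, and the rule that interoperability follows the shared radio band, can be captured in a small Python table. A sketch (the table and function are ours, assembled from the figures in this section):

```python
# Summary of the 802.11 variants discussed above: radio band and top speed.
WIFI = {
    "802.11a": {"band_ghz": 5.0, "max_mbps": 54},
    "802.11b": {"band_ghz": 2.4, "max_mbps": 11},
    "802.11g": {"band_ghz": 2.4, "max_mbps": 54},
}

def compatible(std1, std2):
    # Two standards can interoperate only if they share a radio band.
    return WIFI[std1]["band_ghz"] == WIFI[std2]["band_ghz"]

compatible("802.11b", "802.11g")  # both in the 2.4 GHz band
compatible("802.11a", "802.11b")  # different bands, not compatible
```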

Wireless LANs primarily use CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance), a "listen before talk" method of minimizing collisions on the wireless network. This results in less need to retransmit data.

Wireless standards operate within a wireless topology.

Introduction to Networking
By: Azhar Qureshi
September 2010

Peetabeck Academy, Fort Albany, Ontario

Network topology - Star, Bus, Mesh, and Ring topologies

The topology of a network is its physical layout. You should be able to identify, based on a picture or description, the star, bus, mesh, and ring topologies.

Star topology is the most popular network topology in businesses today. It consists of nodes connected to a central switch or hub. If you have a home network, you are probably using the star topology.

Bus topology is one in which all of the workstations are connected to a single cable. This topology is frequently found in coax, or 10Base2, networks. The bus network has a terminator on each end of the network. If a terminator is not present, or if there is a problem in the line (e.g. a NIC failure, or a cable disconnected from a PC), all workstations on that line lose connectivity.

Mesh topology is one in which all of the workstations are connected to each other. This topology is typically only used when high availability is a requirement. It is expensive to maintain and troubleshoot.

Ring topology is one in which all of the computers are connected in a loop and data is passed from one workstation to another. This is most common in a token ring environment, where a "token" is sent with data from one node to another until it finds its destination.

Network Topology is the physical layout of the network. This concept is the foundation for understanding corporate networks and the technologies used to make them function.

In this section, you will learn:

  • What a network topology is;
  • How to identify the different network topologies.

What is a Network Topology?

There are basically two components to a network: devices on the network that want to share resources or information, and the medium which allows the communication to occur. A Network Topology is the physical layout of the computers, servers, and cables. There are four topologies mentioned in this learning item: star, bus, mesh, and ring. You can add wireless to the list as an increasingly popular option for network topology.

Understanding Network Topologies

Your corporate network may be a combination of several of these topologies. You may have one topology in your data center, a different one in your offices, and a third in your conference rooms. You need to be familiar with each of the topologies, their characteristics, and what they look like drawn out.

Star Topology

The most popular topology for business today - the star topology consists of all of the nodes on a network connected to a central switch or hub. A node is a device attached to the network - such as a computer.

[Figure: star topology]

Each node on the network has a cable back to the central switch. If one cable fails to a node, only that node (computer) is affected. You can combine several switches or hubs to create several stars, all connected together.

The Star topology is very inexpensive to maintain versus other topologies. 10BaseT is an example of Star topology. Think of the star topology as a big wheel. At the center of the wheel is a switch or hub and each spoke going out from the center goes to a node.

Bus Topology

Bus topology is one which all of the devices on the network are connected with a single cable with terminators on each end. This single cable is often referred to as a backbone or trunk.

[Figure: bus topology]

The typical Bus network uses coax as its cable. Coax is a cable similar to what you use for your cable TV. Thin-coax Ethernet networks are referred to as 10Base2.

The upside to using coax is that it is inexpensive, easy to install, and is not as susceptible to electromagnetic interference as twisted pair cable is.

The downside for a coax network is that the speed is limited to 10 Mbps (Megabits per second) and that if an interruption occurs in the cable, all of the nodes (workstations) on the cable will lose connectivity. If a NIC fails or a cable is disconnected at any point in the network, the cable will no longer be terminated properly, so all of the computers will lose connectivity to the network.
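The difference in failure domains between bus and star can be made concrete with a small Python sketch (a hypothetical model of ours, not a networking API): on a bus, one break takes down every station, while on a star only the node behind the failed cable drops.

```python
def affected_nodes(topology, nodes, failed_node):
    """Return the set of nodes that lose connectivity after one fault."""
    if topology == "bus":
        # A break anywhere leaves the shared cable unterminated:
        # every station on the segment drops.
        return set(nodes)
    if topology == "star":
        # Only the node whose cable to the central switch failed is affected.
        return {failed_node}
    raise ValueError(f"unknown topology: {topology}")

stations = ["pc1", "pc2", "pc3", "pc4"]
affected_nodes("bus", stations, "pc2")   # all four stations go down
affected_nodes("star", stations, "pc2")  # only pc2 is affected
```

This asymmetry is a large part of why star wiring displaced coax buses in business networks.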

Mesh Topology

A mesh topology is one in which all of the nodes are directly connected with all of the other nodes.

[Figure: mesh topology]

A mesh topology is the best choice when you require fault tolerance; however, it is very difficult to set up and maintain.

There are two types of mesh network: full mesh and partial mesh. A full mesh is one in which every workstation is connected to every other one in the network. In a partial mesh, the workstations have at least two NICs with connections to other nodes on the network. Mesh networks are commonly used in WANs.
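The expense of a full mesh follows directly from the link count: with n nodes, every pair needs its own connection, giving n(n-1)/2 links. A quick Python check of how fast that grows:

```python
def full_mesh_links(n):
    # Every node connects directly to every other node:
    # n choose 2 = n * (n - 1) / 2 links.
    return n * (n - 1) // 2

# The wiring cost grows quadratically, which is why full mesh is usually
# reserved for small, high-availability cores (or WAN backbones).
[full_mesh_links(n) for n in (4, 8, 16)]  # 6, 28, 120 links
```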

Ring Topology

The ring topology is one in which the network forms a loop and data is passed from one workstation to another.

[Figure: ring topology]

Commonly, you find the ring topology with token ring networks. Token ring networks are defined by IEEE 802.5 and were primarily developed by IBM. The token ring network is designed to transmit a token, or a special frame, designed to go from node to node around the ring. As the frame passes, if a workstation needs to transmit data, it modifies the frame, attaches its data and sends it on. If the data is intended for the next workstation on the network, it receives the data and the information stops at that workstation. If it is intended for somewhere else on the network, the data is retransmitted around the ring until it finds its intended location. Once the data finds its new home, a blank token is transmitted and another workstation can attach data and then that data travels around the ring.

There is a token holding timer to prevent a workstation from transmitting too much data. This protocol ensures all workstations on the network get an opportunity to send data. The original specification could only operate up to 16 Mbps though newer Fast Token Ring networks can transmit up to 1 Gbps (gigabit per second).

Advantages of token ring networks include a 4 KB maximum frame size, longer distance capabilities than Ethernet, and the guarantee that each station gets access to the token at some point. Ethernet, by contrast, is a shared access medium, meaning each workstation has equal access to the available bandwidth at any given time.

The recommended distance for Type 1 cabling on a token ring network is 300 meters, on Unshielded Twisted Pair (UTP) cabling, about 150 meters. More details will be discussed about token ring shortly.
