Sunday, May 23, 2010

Switches ( forwarding, flooding, filtering )

____________________________________________

A Cisco switch will do one of three things with an incoming frame:

 -  forward it
 -  flood it
 -  filter it

To make this decision, the switch consults its MAC address table for an entry matching the destination MAC address - but first, it checks for an entry matching the frame's source MAC address, because it's that source MAC that the switch uses to build the table in the first place!


Say we have a hub with hosts A and B connected to it, and that hub is connected to a switch. Two other systems are connected to the same switch. The switch has just been connected to the network, so it does not yet know the MAC address of any system. A router can run a dynamic routing protocol to discover routes, but there is NO comparable dynamic protocol for a switch to discover MAC addresses.

There is only one way a switch can learn the MAC addresses of the systems dynamically: by examining the source address in the header of each incoming frame. (MAC addresses can also be statically configured on the switch.)

Suppose a frame from source aa-aa-aa-aa-aa-aa needs to go to destination cc-cc-cc-cc-cc-cc. Initially the switch knows nothing about the destination MAC address. So it makes an entry for the source MAC address (aa-aa-aa-aa-aa-aa) in its MAC address table and floods the frame. Flooding means the switch sends the frame out every one of its ports except the one it came in on.

This kind of frame is known as an "unknown unicast frame", because the information needs to be sent to only one system, but that system's MAC address is unknown. An unknown unicast frame is always flooded.

After the flooding is done, host C (cc-cc-cc-cc-cc-cc) sends a frame of its own, and the switch makes an entry in the MAC address table for host C. Since the switch already knows host A's MAC address, it does not flood this time - it forwards the frame out the single port leading to A. That's how the switch dynamically learns the MAC addresses of all the hosts on the network.

Now, what happens if host A sends a frame to host B? Remember, we have a hub connected in the middle. So here's what happens :


 -  The hub receives the incoming frame
 -  The hub duplicates it and sends the frame out all of its other ports
 -  Host B receives the frame, and so does the switch
 -  When the switch receives the frame, it looks into its MAC address table
 -  The switch sees that the source and destination are present on the same port
 -  The switch then filters the frame (i.e. it drops the frame). A switch never sends a frame back out the same port it came in on.

There are always exception to rules in networking but there is no exception to the rule :
"Switches never send a frame back out the same port it came in on."

Flooding :-
Flooding is performed when the switch has no entry for the frame's destination MAC address. When a frame is flooded, it is sent out every single port on the switch except the one it came in on. Unknown unicast frames are always flooded.

Forwarding :-
Forwarding is performed when the switch does have an entry for the frame's destination MAC address. Forwarding a frame means the frame is being sent out only one port on the switch.

Filtering :-
Filtering is performed when the switch has an entry for both the source and destination MAC addresses, and the MAC table indicates that both addresses are found off the same port.

Broadcasting :- There is one other frame type that is sent out every port on the switch except the one that received it, and that's a broadcast frame. Broadcast frames are intended for all hosts, and the MAC broadcast address is ff-ff-ff-ff-ff-ff ( or FF-FF-FF-FF-FF-FF, as a MAC address's case does not matter ).
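
To tie the three behaviors together, here's a minimal Python sketch of a learning switch making that flood / forward / filter decision. The class name and port numbers are purely illustrative - this only shows how the logic fits together, not any particular vendor's implementation.

BROADCAST = "ff-ff-ff-ff-ff-ff"

class LearningSwitch:
    def __init__(self):
        self.mac_table = {}   # MAC address -> port it was learned on

    def receive(self, frame_src, frame_dst, in_port):
        # Learning: record the source MAC against the incoming port.
        self.mac_table[frame_src] = in_port

        if frame_dst == BROADCAST or frame_dst not in self.mac_table:
            # Broadcast or unknown unicast: flood out every port except in_port.
            return f"flood (all ports except {in_port})"
        out_port = self.mac_table[frame_dst]
        if out_port == in_port:
            # Source and destination live off the same port: filter (drop the frame).
            return "filter (drop)"
        # Known destination on a different port: forward out that one port only.
        return f"forward out port {out_port}"

sw = LearningSwitch()
print(sw.receive("aa-aa-aa-aa-aa-aa", "cc-cc-cc-cc-cc-cc", in_port=1))  # flood (C unknown)
print(sw.receive("cc-cc-cc-cc-cc-cc", "aa-aa-aa-aa-aa-aa", in_port=2))  # forward out port 1
print(sw.receive("bb-bb-bb-bb-bb-bb", "aa-aa-aa-aa-aa-aa", in_port=1))  # filter (same port)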
____________________________________________

Switch MAC address tables

____________________________________________
The MAC address table is where a switch keeps track of all the MAC addresses it knows about. Static MAC addresses are those entered manually on the switch (plus the switch's own MAC addresses), while dynamic MAC addresses are those the switch learns from the hosts connected to it. On a Cisco switch, the following command displays the MAC address table.
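
A minimal example, assuming a reasonably recent IOS version (older versions use the hyphenated form shown second):

Switch#show mac address-table
Switch#show mac-address-table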


____________________________________________

Hubs vs Switches

____________________________________________
With hubs, we've one big collision domain consisting of all connected hosts. When hosts are connected to their own switch ports, they each have their own individual collision domain.

Hubs only allow one device to transmit at a time, resulting in shared bandwidth. Switches allow hosts to transmit simultaneously.

When one host connected to a hub sends a broadcast, every other host receives that broadcast and there's nothing we can do about it. When a host connected to a switch sends a broadcast, every other host receives it by default - but there is something we can do about that, as you'll see in the VLAN section of this course.

The universal symbol for a switch is a box with four arrows pointing in opposite directions. When systems are connected to a switch, as shown in the following network diagram, each system has its own collision domain, so collisions cannot occur.

Switches do not break up the broadcast domain - by default, all connected systems are in a single broadcast domain. This default behavior can be changed by configuring VLANs on the switch.

Microsegmentation is a term sometimes used in Cisco documentation to describe this "one host, one collision domain" effect. It's not a term I hear a great deal in the field, and you might not either, but it's a good term to know for Cisco exams.

____________________________________________

Repeaters / Hubs / Bridges

____________________________________________

Repeaters:

With many networking terms, the name is indeed the recipe, and that's very true of a repeater. A repeater's job is to repeat an electrical signal, the form that our data has taken to be sent across a cable. Remember, "it's all ones and zeros!"

The repeater takes an incoming signal and then generates a new, clean copy of that exact signal. This allows data to travel farther than a single maximum-length cable would otherwise permit, and also helps to ward off attenuation - the gradual weakening of an electrical signal as it travels.


Hub:

A hub is basically the same as a repeater, but the hub will have more ports. That's the only difference between the two. (Some hubs have greater capabilities than others, but a "basic" hub is simply a multiport repeater.)

Neither hubs nor repeaters have anything to do with the Data Link layer of the OSI model, nor do they perform any switching at all. Hubs and repeaters are strictly Physical layer devices, and that's where the trouble comes in. For our next example, we'll consider a hub with four PCs connected to it.

Since the hub simply repeats every signal out all of its ports, two PCs transmitting at the same time will result in a collision. To help prevent this, a host on a shared Ethernet segment will use CSMA/CD (Carrier Sense Multiple Access with Collision Detection). To review, here's the CSMA/CD process:

 -  A host that wants to send data will first "listen to the wire", meaning that it checks the shared media to see if it's in use.
 -  If the media is in use, the host backs off for a few milliseconds before checking again.
 -  If the media is not in use, the host sends the data.

If two PCs happen to send data at the exact same time, the voltage on the wire will actually change, indicating to the hosts that there has been a data collision.

The two PCs that sent the data will generate a "jam signal", which indicates to the other hosts on the shared media that they should not send data due to a collision.

Those two PCs both invoke a backoff timer, also in milliseconds. When each host's random timer expires, they will each begin the entire process again by listening to the wire. Since the backoff timer value is totally random, it's unlikely the two hosts will have the same problem again.
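
Here's a rough Python sketch of that listen / transmit / back off cycle. The wire and the collision check are just stand-in functions, and the timing values are arbitrary rather than the real 802.3 slot times - it's only meant to show the order of operations.

import random
import time

def csma_cd_send(wire_busy, collision_detected, data, max_attempts=16):
    # Rough sketch of the CSMA/CD cycle described above.
    for attempt in range(1, max_attempts + 1):
        # 1. Carrier sense: listen to the wire first.
        while wire_busy():
            time.sleep(0.001)                      # back off a few milliseconds, then listen again
        # 2. The wire is idle, so transmit.
        print(f"attempt {attempt}: transmitting {data!r}")
        if not collision_detected():
            return True                            # no collision - the transmission succeeded
        # 3. Collision: a jam signal would go out here, then a random backoff timer starts.
        backoff = random.uniform(0.001, 0.010)     # random backoff, in seconds
        print(f"attempt {attempt}: collision, backing off for {backoff * 1000:.1f} ms")
        time.sleep(backoff)
    return False                                   # too many collisions - give up

# Example run: the wire is idle, and each attempt has a 30% chance of a collision.
csma_cd_send(wire_busy=lambda: False,
             collision_detected=lambda: random.random() < 0.3,
             data="hello")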


A topology like this is never recommended, because it eats up a lot of bandwidth. Every time a system broadcasts a message, it's highly unlikely that every other system needs it - but they all get it anyway.

Bridges:

Bridges were introduced so that we could create smaller collision domains, which results in fewer collisions. Typically a bridge is placed between multiple repeaters and hubs. Having more collision domains does not mean we will have more collisions - by segmenting one single network into two, there will be fewer collisions. The network segments are like logical divisions of the physical network.


Bridges do NOT help to lower the number of broadcasts. So we still have one big broadcast domain.
____________________________________________

Switching

____________________________________________
Switching

 -  Repeaters, Hubs, Bridges
 -  Building the MAC table
 -  "Flood, Filter, or Forward?"
 -  Frame Processing Methods
 -  Virtual LANs
 -  Cisco Three-Layer Switching Model
 -  Introduction to STP
 -  Basic Switch Security
 -  Port Security Defaults, Options, and Configuration
____________________________________________

Cables overview and WAN cable....

____________________________________________

Cisco routers use serial cables to connect via their serial interfaces (typically for Frame Relay). In home labs, you may connect Cisco router serial interfaces directly with a DTE/DCE cable.

Crossover cables are used to connect two like devices, typically two switches.

Rollover cables are used to connect a laptop's serial port to a router or switch console port.

Straight-through cables are used to connect a PC to a switch port.

Watch the cable types and the cable lengths - any cable over 100 meters is cause for alarm.
____________________________________________

Basics of MAC address ( Media Access Control )

____________________________________________

Ethernet / NIC / Physical / LAN / BIA / MAC Addressing (aliases of the MAC address)

MAC address :- short for Media Access Control

The MAC address is used by switches to send frames to the proper destination, as you'll see in the LAN Switching section. The entire MAC address is a 48-bit address that looks  a little something like :
aa-bb-cc-11-22-33

That MAC address actually has two parts, the first being the Organizationally Unique Identifier (OUI). In the example above, "aa-bb-cc" is the OUI. The OUI is assigned to hardware vendors by the Institute of Electrical and Electronics Engineers (IEEE). A given OUI is assigned to one and only one vendor.

The second half of the MAC address is a value not yet used by that particular vendor. Looking at the MAC address example given earlier, we now know that:

 -  the OUI is aa-bb-cc
 -  the vendor has not yet used 11-22-33 with that particular OUI, so the vendor is doing so now. If a single vendor like Cisco has two different OUIs assigned, the same second half of a MAC address can be used under each of those OUIs.

The MAC address is sometimes called the physical address because it physically exists on the network card. The address is burned into the card, giving it yet another name - the Burned-In Address (BIA).

As with IP addresses, we have broadcast and multicast MAC addresses. It's a good idea to be able to identify these addresses, and here's how to do it!

The broadcast MAC address is the "all-Fs" address : ff-ff-ff-ff-ff-ff (or FF-FF-FF-FF-FF-FF, as case does not matter in hexadecimal)

There is a range of multicast MAC addresses, and the first half of a multicast MAC address is always "01-00-5e". The second half of a multicast MAC address will fall in the range 00-00-00 through 7F-FF-FF. Watch out for that 1!
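
Here's a small Python sketch that applies those rules - it classifies a MAC address as broadcast, multicast, or ordinary unicast and pulls out the OUI. It's only an illustration of the address ranges above, nothing more.

def classify_mac(mac):
    # Classify a MAC address string like "aa-bb-cc-11-22-33" (case does not matter).
    octets = mac.lower().split("-")
    oui = "-".join(octets[:3])
    device_id = "-".join(octets[3:])

    if octets == ["ff"] * 6:
        kind = "broadcast"
    elif oui == "01-00-5e" and int(octets[3], 16) <= 0x7f:
        kind = "multicast"        # second half falls in 00-00-00 through 7f-ff-ff
    else:
        kind = "unicast"
    return kind, oui, device_id

print(classify_mac("FF-FF-FF-FF-FF-FF"))   # broadcast
print(classify_mac("01-00-5E-7F-00-01"))   # multicast
print(classify_mac("aa-bb-cc-11-22-33"))   # unicast, OUI aa-bb-cc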
____________________________________________

Monday, May 3, 2010

Standard Ethernet Cable types

____________________________________________ 
A standard Ethernet cabling type is Category 5 Unshielded Twisted-Pair commonly known as CAT 5 UTP. The connector on the end of a typical Cat5 UTP cable is an RJ-45 connector. This type of connector has a tab on the bottom that snaps into place when the connector is correctly placed into the device. (You can usually hear the "snap" sound, unless you're in a very loud wiring closet !)

The cable will contain separate wires inside, and the endpoints of these wires are referred to as pins. While you now know that bits are sent over these wires, it's important to know that the same set of pins is always used to transmit, and a separate set of pins is always used to receive.

Pins 1 and 2 : Transmit
Pins 3 and 6 : Receive

Crosstalk is caused by the electromagnetic interference mentioned a moment ago. Basically, a signal "crosses over" from one pair of cables to another, causing the signals to become unusable.

NEXT (near-end crosstalk) is a condition generally caused by crossed or crushed pairs of wires. The conductors inside the wires don't even have to be exposed - if the conductors are too close, the signal traveling on one wire can actually interfere with the signal on another wire. The "near end" is a relative term, referring to the end of the cable being tested (as opposed to far-end crosstalk, or FEXT).

In a typical RJ-45 connection, the crosstalk is actually at its highest level as data enters the cable.

You may occasionally see the term PSNEXT. This is short for "Power Sum Near-End Crosstalk", and refers to the calculation carried out when a NEXT test is run. When the NEXT results for each pair of wires are added, the result is the PSNEXT value.

In the following exhibit, we've got three separate physical connections:
 -  Cable 1 :- Straight-through cable
 -  Cable 2 :- Crossover cable
 -  Cable 3:- Rollover Cable




 -  A laptop connected to a switch (Cable 3)
Here we need a rollover cable. All eight wires in the cable will "roll over" to another pin at the remote end, with the wire on Pin 1 at one end rolling over to Pin 8 at the other end, the wire on Pin 2 at one end rolling over to Pin 7 at the remote end, and so forth.

You may also need an adapter for your rollover cable, since one end of the cable is a DB-9 connector, and few if any of today's laptops have such a port. You probably do have USB ports on your laptop, and you can get an adapter that allows you to connect a rollover cable to your laptop's USB port from just about any cable dealer.

As for the connection to the switch, you need to make sure you connect the RJ-45 connector on the other end of the rollover cable to the console port of the switch. I'll drive this point home at least one more time elsewhere in the course....

 -  Two switches connected to each other (Cable 2) :- we may have occasion to connect two similar devices directly with Ethernet, which can cause a problem since both devices will use the same pair of wires for transmitting data.
It's very common to connect two switches to allow them to send data over that connection, called a trunk. You'll learn all about the particulars of trunking in your CCNA studies, but the first thing we have to do is make sure we have the correct cable!

Here we need a crossover cable. The wire connected to Pin 1 on one side will no longer be connected to Pin 1 on the other, as it was in a straight-through cable. Four wires will "cross over" in a crossover cable :
Local Cable End        Remote Cable End
Pin 1                  Pin 3
Pin 2                  Pin 6
Pin 3                  Pin 1
Pin 6                  Pin 2

 -  A PC connected to a switch (Cable 1) :- here we need a straight-through cable. A straight-through cable is used to connect a PC to a switch or hub. In a straight-through cable, the wire connected to Pin 1 on one side is connected to Pin 1 on the other, the wire connected to Pin 2 on one side is connected to Pin 2 on the other, and so forth.

So, looking at the connector diagrams physically: if the wire colors line up exactly the same on both ends, it's a straight-through cable; if four of them are swapped, it's a crossover cable; and if the order of all of them is reversed, it's a rollover cable.
____________________________________________

Sunday, May 2, 2010

Ethernet Standards

____________________________________________ 
The Ethernet standard you're most likely to be familiar with is 10Base-T, specified by IEEE 802.3. The "T" stands for twisted-pair cable, and the maximum length of a 10Base-T copper cable is 100 meters. The "10" refers to the 10 MegaBits Per Second (MBPS) capacity.

You may be asking "why twist the cable pairs?" Twisting pairs of wires inside the cable cuts down on the possibility of electromagnetic interference, whether that interference comes from another cable or an outside source - elevators are notorious for generating such interference.

In the previous illustrations, we looked at a network with a single coaxial cable and multiple hosts connected to that coax cable. That topology was used by the first Ethernet standards, 10Base5 and 10Base2.

The sole physical components were the Ethernet cards in the computers and coaxial cable, which is the topology we looked at in the previous example. The cable made up a bus that all the connected devices would use. This type of bus is referred to as a shared bus.

Ethernet is considered a logical bus topology.

The ending numbers in the terms "10Base5" and "10Base2" allegedly refer to the limit on the length of the cable, expressed in units of 100 meters. This is true for 10Base5; the limit on the cable length is 500 meters. It's not quite accurate for 10Base2, though; the limit on that cable is 185 meters, NOT 200 meters.

Fast Ethernet: -
Fast Ethernet is defined by IEEE 802.3u, and has a maximum capacity of 100 MBPS. Fast Ethernet copper cables also have a maximum cable length of 100 meters.

Gigabit Ethernet :-
Defined by IEEE 802.3z and 802.3ab, Gigabit Ethernet has a maximum capacity of 1000 MBPS, also expressed as 1 GBPS (GigaBits Per Second). The maximum cable length is 100 meters here as well, but we cannot use a regular copper cable for Gigabit Ethernet.

Ethernet runs at 10 MBPS, is defined by IEEE 802.3, and its copper cable has a maximum length of 100 meters. Variations include 10Base-T, 10Base2, and 10Base5, with the last two involving a shared cable bus.

Fast Ethernet runs at 100MBPS, is defined by IEEE 802.3u, and its copper cable has a maximum length of 100 meters.

Gigabit Ethernet runs at 1000MBPS (1 GBPS ), is defined by IEEE 802.3z, and also has a 100 meter cable length maximum - but it cannot use copper cabling.
____________________________________________

CSMA / CD

____________________________________________ 

With each host connected to its own switch port, we no longer have to worry about collisions when hosts send data simultaneously. In the old days of networking, though, that wasn't the case. While you may not see the following physical topology very often in your career, if at all, I'm presenting it here so you know how hosts on a shared Ethernet segment practice CSMA/CD - Carrier Sense Multiple Access with Collision Detection.



___________________________________________

Port Numbers

____________________________________________ 

The port number system works beautifully, but naturally the hosts need to agree on what port is used for a given protocol. In the previous example, if 10.1.1.1 used TCP port 45 for Telnet and 10.1.1.2 used TCP port 55, we'd have some serious problems.

That's why most protocols use the same port number at all times, and these port numbers are referred to as well-known port numbers. All port numbers below 1024 are reserved, well-known port numbers -- but you don't have to memorize 1024 numbers for the exams!

Some Common TCP Ports :
 -  FTP :- File Transfer Protocol - Uses TCP ports 20 and 21
 -  SSH :- Secure Shell - Uses TCP port 22
 -  Telnet uses TCP port 23
 -  HTTP :- HyperText Transfer Protocol - uses TCP port 80
 -  POP3 :- Post Office Protocol 3 - uses TCP port 110
 -  SSL - Secure Socket Layer - uses TCP port 443

Some Common UDP ports:
 -  DHCP :- Dynamic Host Configuration Protocol - uses UDP ports 67 and 68
 -  TFTP :- Trivial File Transfer Protocol - uses UDP port 69
 -  SNMP :- Simple Network Management Protocol - uses UDP port 161

Protocols using both TCP and UDP ports
 -  DNS :- Domain Name Service - uses UDP and TCP port 53
 -  The port number 24 is reserved in both UDP and TCP for private mail systems. 

With Voice over IP (VoIP) becoming more and more commonplace in today's networks, it couldn't hurt to know that the entire range of UDP ports from 16384 - 32767 is reserved for voice traffic.

On a Cisco router, you can see a list of well-known ports with the following command :-
R1(config)#access-list 100 permit tcp any any eq ?
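
If you'd rather check well-known ports from a script than from a router prompt, Python's standard socket module can look them up in the local services database (so the results depend on that database being present on your operating system):

import socket

# Look up well-known ports by service name and transport protocol.
for name, proto in [("ftp", "tcp"), ("ssh", "tcp"), ("telnet", "tcp"),
                    ("http", "tcp"), ("pop3", "tcp"), ("https", "tcp"),
                    ("tftp", "udp"), ("snmp", "udp"), ("domain", "tcp")]:
    print(f"{name}/{proto}: port {socket.getservbyname(name, proto)}")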



____________________________________________

Socket

____________________________________________ 
A socket may sound like something physical on the PC, but it's not. The socket is simply a combination of IP address and port number. For example, the socket on 10.1.1.2 for port 69 is 10.1.1.2:69. That socket can also be expressed with this format :

(IP address, transport protocol, port number)

That would make the TFTP socket on that PC (10.1.1.2, UDP, 69)
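
Here's a quick Python sketch that expresses that same triple and then shows a real socket in use. The 10.1.1.2 address comes from the example above; the loopback address and port 6900 below are just stand-ins so the snippet can run anywhere.

import socket

# The "socket" from the example above, expressed as a simple tuple.
tftp_socket = ("10.1.1.2", "UDP", 69)
print(f"{tftp_socket[0]}:{tftp_socket[2]} over {tftp_socket[1]}")   # 10.1.1.2:69 over UDP

# The same idea with a real UDP socket object bound to a local address and port.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # SOCK_DGRAM = UDP
s.bind(("127.0.0.1", 6900))        # (IP address, port) - the OS now owns this socket
print(s.getsockname())             # ('127.0.0.1', 6900)
s.close()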

____________________________________________

Monday, March 8, 2010

Need of port number in networking

____________________________________________
If you're not familiar with MAC or IP addressing, we're going to cover that in another section; for now it's enough to know that when two hosts communicate on a network, they're using these MAC and IP addresses as the destination when the data is sent.


So far, so good. But what if one host is sending multiple flows of information to the remote host? Let's say that the PC at 10.1.1.1 is sending three different kinds of information to the PC at 10.1.1.2:

    -   transferring a file via Trivial File Transfer Protocol ( TFTP )
    -   email via Simple Mail Transfer Protocol ( SMTP )
    -   opening a remote connection via Telnet

If you're not familiar with those three protocols, don't worry about it - you will be before you're done with this course. For now, it's enough to know that one PC is sending three different types of information to the other, and the MAC and IP source and destination addresses for all three transmissions are going to be the same. How can the receiving host tell TFTP from SMTP if that's the case?

We need a way for the recipient to differentiate one data flow from another, and since the source and destination MAC and IP addresses will be the same for all three flows, those won't do. What will do the job is the TCP or UDP port number. While these three data flows will have the same Layer 2 (MAC) and Layer 3 (IP) source and destination addresses, they'll have different, pre-assigned port numbers.

Port number multiplexing :- carrying multiple data flows between the same two hosts at the same time, with the flows kept apart by their port numbers.
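
As a concrete illustration, here's a tiny Python sketch of how the receiving host could use the destination port number to hand each arriving segment to the right application, even though the source and destination IP addresses are identical for all three flows. The port-to-application table below just uses the standard well-known ports for those three protocols.

# Demultiplexing by destination port: same source/destination IPs, different applications.
PORT_TO_APP = {69: "TFTP", 25: "SMTP", 23: "Telnet"}   # illustrative well-known ports

incoming_segments = [
    {"src_ip": "10.1.1.1", "dst_ip": "10.1.1.2", "dst_port": 69, "payload": "file chunk"},
    {"src_ip": "10.1.1.1", "dst_ip": "10.1.1.2", "dst_port": 25, "payload": "email body"},
    {"src_ip": "10.1.1.1", "dst_ip": "10.1.1.2", "dst_port": 23, "payload": "keystrokes"},
]

for seg in incoming_segments:
    app = PORT_TO_APP.get(seg["dst_port"], "unknown - drop or reject")
    print(f"segment to port {seg['dst_port']} -> hand payload to {app}")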
____________________________________________

TCP features vs. UDP features compared with headers

____________________________________________
All of these features - the three-way handshake, windowing, sequence numbering, error detection and recovery - are TCP features. UDP doesn't use any of them. Two questions come to mind :

      -   Why doesn't UDP offer these features?
      -  Why in the world do we use UDP for anything?

A look at the TCP and UDP headers will answer both of those questions!!

The TCP header


UDP Header


UDP can't perform any of those TCP features because UDP literally can't offer them. The UDP header has no sequence number field, no acknowledgement number field, no ACK bit, no SYN bit, and no window field.

The TCP and UDP headers have only three values in common :
   -   Source port
   -  Destination port
   -  Checksum

Now, if UDP can't offer all of these features, why do we use it in the first place? That question can really be answered with one word: "overhead".
The TCP header is much larger than the UDP header. That header is applied to every segment, and that adds up! UDP's advantage over TCP is that its header is much smaller than TCP's.
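
To put a rough number on that overhead, here's a quick back-of-the-envelope calculation in Python. It assumes the minimum header sizes - 20 bytes for TCP (with no options) and 8 bytes for UDP - and an arbitrary transfer size, just to show how the difference adds up.

TCP_HEADER = 20   # minimum TCP header size in bytes (no options)
UDP_HEADER = 8    # fixed UDP header size in bytes

segments = 10_000          # e.g. a transfer chopped into 10,000 segments
payload_per_segment = 512  # bytes of application data per segment (arbitrary)

tcp_total = segments * (payload_per_segment + TCP_HEADER)
udp_total = segments * (payload_per_segment + UDP_HEADER)
print(f"TCP carries {tcp_total - udp_total} more header bytes "
      f"({segments * TCP_HEADER} vs {segments * UDP_HEADER})")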

___________________________________________

Sunday, March 7, 2010

Windowing technique

____________________________________________

Windowing refers to the amount of data that a sender is allowed to transmit without waiting for an acknowledgement. In this case, the size of the window is 2400 bytes, meaning that the sender can transmit 2400 bytes before it has to stop and wait for an ack.

The data recipient decides the size of the window, not the sender. This gives the recipient some control over how much data is sent ("flow control").



The term sliding window refers to this dynamic adjustment of the window size.
UDP does not have windowing capabilities.
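
Here's a minimal Python sketch of the windowing idea: the sender never has more than the window size's worth of unacknowledged bytes outstanding, and each acknowledgement lets it send more. The 2400-byte window matches the example above; the segment size and the instant acknowledgements are simplifications, and real TCP is considerably more involved.

def windowed_send(data, window=2400, segment_size=800):
    # Send data in segments, never exceeding 'window' unacknowledged bytes.
    next_to_send = 0      # first byte not yet transmitted
    acked_up_to = 0       # first byte not yet acknowledged by the receiver

    while acked_up_to < len(data):
        # Keep sending while there is data left and room in the window.
        while next_to_send < len(data) and next_to_send - acked_up_to < window:
            chunk = data[next_to_send:next_to_send + segment_size]
            print(f"send bytes {next_to_send}-{next_to_send + len(chunk) - 1}")
            next_to_send += len(chunk)
        # Window is full (or the data ran out): wait for an acknowledgement.
        acked_up_to = next_to_send          # pretend the receiver acks everything sent so far
        print(f"  ack received up to byte {acked_up_to}")

windowed_send(b"x" * 6000)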

____________________________________________

Friday, February 26, 2010

TCP’s Error Detection / Error Recovery Feature

____________________________________________
TCP’s Error Detection / Error Recovery Feature

Before we take a look at how TCP performs both error detection and error recovery, we need to draw a very clear line between those two terms. They are not the same thing!
-    Error detection is finding an error
-    Error recovery is doing something about the error

TCP does both, and it uses both a sequence number and an acknowledgement number (“ack”) in the TCP header to do so. In the following example, one host is sending four segments to another host. Each of the segments has a sequence number. That sequence number tells the recipient in what order to reassemble the segments, and it’s also a fundamental concept in error detection and recovery. The ack is simply the next sequence number the receiver would like to see.

For simplicity’s sake, we’ll assume the first segment has a sequence number of 100, and we’ll add 100 to the subsequent sequence numbers. (Remember, we’re at the Transport layer – these are segments!)


The recipient will now send a segment back that contains no data, but does have an ack number set. You might think that the ack number would reflect the last sequence number received, but that’s not quite right. The ack number will actually indicate the next sequence number the data recipient expects to see!

As in our example above, the receiver sends an ACK 500 back to the sender to let the sender know that it got all the segments. If SEQ 200 is lost during the transmission, the receiver instead sends an ACK 200 at the end of the transfer. The sender receives ACK 200 and resends SEQ 200 to the receiver. The receiver then sends an ACK 500 to the sender to confirm that it has received all the segments.


What if the acknowledgement itself is lost? The sender doesn't just wait forever - it starts a retransmission timer when it sends the data, and if no acknowledgement arrives before that timer expires, the data is retransmitted.

This entire process revolves around two things:
-    The sender is waiting for a positive message from the recipient that the data was received.
-    If that message isn’t received, the data is retransmitted.
That’s why we call this entire process Positive Acknowledgement with Retransmission (PAR).
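
The whole PAR cycle - number the segment, wait for a positive acknowledgement, retransmit if it never arrives - can be sketched in a few lines of Python. The "network" here is just a random chance of loss, and the sequence numbers follow the 100-per-segment example from earlier; none of it is meant to mirror real TCP timers.

import random

def send_with_par(segments, first_seq=100, seq_step=100, max_tries=5):
    # Positive Acknowledgement with Retransmission: resend until acked (or we give up).
    seq = first_seq
    for segment in segments:
        for attempt in range(1, max_tries + 1):
            delivered = random.random() > 0.3          # 30% chance the segment (or its ack) is lost
            if delivered:
                ack = seq + seq_step                   # receiver acks the NEXT sequence number it expects
                print(f"SEQ {seq} sent -> ACK {ack} received")
                break
            print(f"SEQ {seq} sent -> no ACK before the retransmission timer expired, resending")
        seq += seq_step

send_with_par(["seg1", "seg2", "seg3", "seg4"])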


____________________________________________

TCP’s “ Three-Way Handshake ”

____________________________________________

TCP’s “Three-Way Handshake”

With TCP, there’s work to be done before data is transmitted. The two devices have to agree on some basic parameters before segments can be sent – and this negotiation has the curious name three-way handshake. If that’s the first time you’ve heard this term, you’re probably wondering how a handshake can be three-way! Then again, maybe you don’t want to know – but to pass the CCENT and CCNA exams, we gotta know! Let’s take a step-by-step look at this process.

Before the sender can start sending, there’s going to be negotiation between the two devices regarding the rules for data transmission. That negotiation is the three-way handshake itself, which begins with the sender transmitting a TCP segment with the Synchronization (“SYN”) bit set. The primary value being negotiated here is the TCP sequence number, which we’ll discuss in more detail in the next section. This is the first part of the three-way handshake.

The recipient responds with a TCP segment with both the synchronization and acknowledgement bits set – a “SYN/ACK”. This is part two of the three-way handshake.

The sender responds with an ACK, and the three-way handshake is complete.

UDP does not use a three-way handshake.

In addition to the orderly construction of the communication channel, TCP uses the FIN (“Finish”) to bring the channel down when the communication is closed.
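
You can actually watch both behaviors from Python's standard socket module: calling connect() on a TCP socket triggers the SYN / SYN-ACK / ACK exchange under the hood, close() starts the FIN exchange, and a UDP socket just sends with no handshake at all. The loopback address and port number below are only examples.

import socket

# TCP: connect() performs the three-way handshake, close() tears the connection down with FIN.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 5400))
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 5400))    # SYN ->, <- SYN/ACK, ACK -> happen here
conn, addr = server.accept()
print("TCP connection established with", addr)
client.close()                          # FIN exchange starts here
conn.close()
server.close()

# UDP: no handshake - the datagram is simply sent.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", ("127.0.0.1", 5400))
udp.close()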

____________________________________________

Transmission Control Protocol ( TCP ) vs. User Datagram Protocol ( UDP )

____________________________________________

TCP :
-    Guaranteed delivery
-    Error detection via sequence and ACK numbers
-    Windowing
-    “Connection-Oriented”
UDP:
-    “best-effort” delivery, but no guarantee of delivery
-    No error detection
-    No windowing
-    “Connectionless”

____________________________________________

Purpose of using OSI and TCP/IP model

____________________________________________
So why do we go through all of these models?

It’s natural to ask why we use networking models in the first place. It’s a good question, and there are some good answers!

Networking models do help software vendors create products that are interoperable. (At least, we hope they’re interoperable.) That doesn’t affect us directly as network admins, but two other uses of these models affect us directly, both as admins and as students.

Breaking networking operations up into smaller parts makes it easier to learn networking in the first place. By using the OSI model in particular, you can take a structured approach to your learning.
-    First, learn about cables and physical specifications (L1)
-    Then learn about switches and MAC addresses (L2)
-    Then start on routing (L3)

Using the OSI model to structure your troubleshooting approach is a real help, too. I always tell students to “start troubleshooting at the physical layer”, and you’ll see what I mean in the Troubleshooting section of the course. There are two kinds of troubleshooters in the world:
-    Those who have a structured approach
-    Those who don’t and are basically throwing stuff out there and hoping something works

____________________________________________

Thursday, February 18, 2010

TCP / IP Network Model

____________________________________________
This model is another way to look at the overall data transport process, and it also uses layers to illustrate the process. However, the TCP/IP model uses only four layers to do so. For the CCENT, CCNA, and any entry-level certification exam from another vendor, it's a very good idea to know :

    -  the layers of both the TCP/IP and OSI model
    -  the responsibilities of each layer
    -  how the layers map from one model to another



The Application layer of the TCP/IP model maps to the top three layers of the OSI model (Application, Presentation and Session). Everything that the top three layers of the OSI model do is performed by the TCP/IP model’s Application layer.

The Transport layer of the TCP/IP model maps directly to the Transport layer of the OSI model. TCP and UDP both operate at this layer, and data takes the form of segments.

The Internet layer of the TCP/IP model maps to the Network layer of the OSI model. Both layers are responsible for routing through the use of IP addresses, static routes, and dynamic routing protocols.

(You will occasionally see some non-Cisco documentation call this layer the Internetwork layer, but “Internet” is the name used in Cisco documentation.)

Finally, the Network Access layer of the TCP/IP model maps to the Data Link and Physical layers of the OSI model.
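
If it helps to have the whole mapping in one place, here it is as a small Python dictionary - it simply encodes the correspondence described in the last few paragraphs.

# TCP/IP model layer -> the OSI layer(s) it maps to
TCPIP_TO_OSI = {
    "Application":    ["Application", "Presentation", "Session"],
    "Transport":      ["Transport"],
    "Internet":       ["Network"],
    "Network Access": ["Data Link", "Physical"],
}

for tcpip_layer, osi_layers in TCPIP_TO_OSI.items():
    print(f"{tcpip_layer:14} -> {', '.join(osi_layers)}")
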
____________________________________________

The Data Transmission Process

____________________________________________
When the end user sends data, that data will go through all seven layers of the OSI model. The data is broken up into smaller and smaller parts beginning at Layer 4 ( The Transport Layer ) until it's in the form of electric signals that can be sent across the physical media.

As the data flows down the OSI model, it's referred to by different terms. You really have to master these and watch for them on your exams. There are four different terms you need to know :

 - At the Application, Presentation, and Session layers, data is simply called "data". (These three layers have nothing to do with breaking up the data.)
 - At the Transport layer, data is placed into segments.
 - At the Network layer, data is placed into packets.
 - At the Data Link Layer, data is placed into frames.
 - Finally, at the Physical layer, data takes the form of bits - and remember, it's all ones and zeros!!!

If I mention "segments", you should know I'm discussing the Transport layer of the OSI model without any other hints, because you might not get any other hints!!!!

As data flows down the OSI model, each layer adds a header that will be removed by the same layer on the other end of the session. These headers are layer-specific in that the Network layer couldn't care less about the contents of any header except the one added by the Network layer on the other end of the session.

As an end user enters data for transmission to a remote host, the first six layers of the OSI model will add a layer-specific header that contains information to be read by the same layer of the OSI model at the remote location. Note that Layer 2, the Data Link layer, adds both a header and a trailer.


The combination of data and a layer-specific header is called a Protocol Data Unit (PDU). There's a PDU for each layer; that is, the combination of data and L7 header information is called an L7 PDU, the data and L6 header information is called an L6 PDU, and so forth.

After the data is successfully transmitted by the Physical layer to the remote location, the data begins to travel back up the model. Each layer will remove the header added by its counterpart - that is, Layer 3 removes and reads the L3 header, Layer 4 removes and reads the L4 header, and so forth.

The term same-layer interaction describes the process of a given OSI layer removing the header placed on the data by the same layer on the sending side. For example, the Application layer on the receiving end will remove only the header placed onto the data by the Application layer on the sending side, and so forth.


The term adjacent-layer interaction refers to the interaction between layers of the OSI model on the same host. That is, the Application layer interacts with the Presentation layer, the Presentation layer interacts with both the Application layer ( the one above it ) and the Session layer (the one below it ), and so forth.
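
A compact way to picture encapsulation and same-layer interaction in code: on the way down, each layer wraps the data with its own header (the Data Link layer would add a trailer as well), and on the way up, each layer strips and reads only the header its counterpart added. The header strings below are placeholders, of course - this is just the shape of the process, not a real protocol stack.

LAYERS = ["Application", "Presentation", "Session", "Transport", "Network", "Data Link"]

def encapsulate(data):
    # Going down the stack: each layer adds its own header (PDU = header + data).
    pdu = data
    for layer in LAYERS:
        pdu = (f"{layer} header", pdu)            # the Data Link layer would add a trailer too
    return pdu

def decapsulate(pdu):
    # Coming back up: each layer removes only the header its counterpart added.
    while isinstance(pdu, tuple):
        header, pdu = pdu
        print(f"removed and read: {header}")
    return pdu

frame = encapsulate("user data")
print(decapsulate(frame))                          # -> 'user data'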


____________________________________________