
Monday 18 July 2016

Next Generation Optical Fibre in Broadband

Making Your Broadband Network Go Further




WHAT ARE FIBER OPTICS:- Fiber optics are simply strands of flexible glass, as thin as a human hair, that are used for telecommunications. These strands carry digital signals in the form of light. Even though the strands are made of glass, they are not stiff and fragile: they can bend much like wires and are very strong. When hundreds or even thousands of these strands are arranged in bundles, the result is called an optical cable. Each glass strand is covered with a special protective layer called cladding, made from a material that reflects light back into the core, or center, of the fiber; the cladding acts like a mirror-lined wall. The final outer layer is a buffer coating that protects the fiber from damage and moisture.

Single-mode and multi-mode are the two main types of fiber optic cable. Single-mode fibers send signals using laser light and have a smaller core diameter. Multi-mode fibers send signals using light-emitting diodes (LEDs) and have a larger core diameter than single-mode fibers.

Fiber optics work on the principle of total internal reflection. When light is launched into the glass fiber, it bounces off the reflective cladding at the sides of the core, so the light can travel around corners. In other words, the light reflects off the inside of the fiber until it reaches its destination.

There is more to a fiber optic system than the cables themselves. The first element is the transmitter, which produces the signals that travel through the cable. An optical regenerator is needed when the light signal has been weakened by traveling over a long distance and needs to be boosted; in practice, the regenerator copies the signal and sends out a new one with the same characteristics. Finally, there is the optical receiver, which receives the light signals and decodes them into a form the device at the far end can read.

Fiber optics have many uses. The Internet is a perfect application, because the information is digital and fiber optic cables carry it digitally. Telephony was one of the first uses for fiber optics, and internet and telephone signals often travel over the same cables. Digital television (cable TV) is also frequently carried over fiber optic cables. Other uses include medical imaging, mechanical inspection, and the inspection of plumbing and sewer lines. Fiber optic links without optical regenerators can span tens of kilometers; with regenerators, they can go on almost indefinitely.

Cables can be installed in buildings, hung on power lines, buried in the ground or even laid across the ocean floor. Fiber optic cables are not perfect; they can break. Sometimes when crews are digging, they accidentally tear up cables. Broken cables can be repaired using a technique called splicing: a worker cuts off the broken ends and reconnects them using special adhesives, heat, or special connectors.
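The total internal reflection principle described above can be checked with a small calculation. This is an illustrative sketch only: the refractive index values below are typical textbook assumptions, not figures from the text.

```python
import math

# Hypothetical refractive indices for a standard single-mode fiber
# (illustrative assumptions, not from any datasheet).
n_core = 1.468   # core index
n_clad = 1.463   # cladding index

# Total internal reflection occurs for rays that hit the core/cladding
# boundary beyond the critical angle: theta_c = arcsin(n_clad / n_core).
theta_c = math.degrees(math.asin(n_clad / n_core))

# The numerical aperture gives the acceptance cone for launching light.
na = math.sqrt(n_core**2 - n_clad**2)

print(f"critical angle: {theta_c:.1f} degrees")
print(f"numerical aperture: {na:.3f}")
```

Because the two indices are so close, the critical angle is very large (around 85°): only light traveling nearly parallel to the fiber axis is trapped, which is exactly why the cladding behaves like a mirror-lined wall for guided rays.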

WHAT ARE THE ADVANTAGES TO HAVING BROADBAND? Why are fiber-optic systems revolutionizing telecommunications? Compared to conventional metal (copper) wire, optical fibers provide:
  • Digital signals: Optical fibers are ideally suited for carrying digital information, which is especially useful in computer networks.
  • Higher carrying capacity: Because optical fibers are thinner than copper wires, more fibers can be bundled into a cable of a given diameter. This allows more phone lines to go over the same cable, or more channels to come through the cable into your business or home.
  • Less signal degradation: The loss of signal in optical fiber is less than in copper wire.
  • Less expensive: Several miles of optical cable can be made more cheaply than equivalent lengths of copper wire. This saves your provider, and you, money.
  • Thinner: Optical fibers can be drawn to smaller diameters than copper wire.
  • Light signals: Unlike electrical signals in copper wires, light signals from one fiber do not interfere with those of other fibers in the same cable. This means clearer phone conversations or TV reception.
  • Low power: Because signals in optical fibers degrade less, lower-power transmitters can be used instead of the high-voltage electrical transmitters needed for copper wires. Again, this saves your provider, and you, money.
  • Non-flammable: Because no electricity is passed through optical fibers, there is no fire hazard.
  • Lightweight: An optical cable weighs less than a comparable copper wire cable, and fiber-optic cables take up less space in the ground.
  • Flexible: Because fiber optics are so flexible and can transmit and receive light, they are used in many flexible digital cameras: for medical imaging in bronchoscopes, endoscopes and laparoscopes; for mechanical imaging, such as inspecting welds in pipes and engines (in airplanes, rockets, space shuttles and cars); and in plumbing, to inspect sewer lines.

Because of these advantages, you see fiber optics in many industries, most notably telecommunications and computer networks. For example, if you telephone Europe from the United States (or vice versa) and the signal is bounced off a communications satellite, you often hear an echo on the line. But with transatlantic fiber-optic cables, you have a direct connection with no echoes.

 

Next Generation:-

 As a recent blockbuster video on YouTube called “A Day of Glass” demonstrates, with the inventive pace of communications technology these days, it is realistic to foresee a world where even the most humble of appliances in our homes and at work, like fridges and desktops, are fully connected and enabled as video and voice interactive devices. It is easy to see that such a world would require an unimaginable amount of bandwidth. The millions of hits that this video has had indicate a real world interest in a future that is so technology and telecoms enabled, and thus offers an explanation for, and a justification for supporting, the incessantly increasing consumer demand for bandwidth in telecoms networks of today. The global path towards offering high speed broadband via fibre to the home has, at its root, the driver that subscribers want to live in a super-connected world and so want the bandwidth to enable it. Hence the telecoms industry is striving to deliver fibre to the home and 100 G data rates to increase capacity wherever there are bottle necks, such as in the core or the metro-core. But let us reflect a little bit on the modest technology that is at the heart of this fast evolving communications industry: the optical fibre. It is a little heralded fact that fibre is fantastic, without optical fibre all of this would not be possible: while a single copper pair is capable of carrying six simultaneous phone calls, one single optical fibre, running at a modest 10 Gb/s over 64 channels, can carry over 10 million simultaneous phone calls! Hence it is fair to say that the optical fibre is the fundamental enabler of all modern day telecommunications networks. But what can the modest optical fibre do to help us enable a future ever more connected world? 
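The phone-call figure quoted above is easy to verify. A minimal sketch, assuming an uncompressed 64 kb/s digital voice channel (the standard G.711 rate):

```python
# Rough sanity check of the capacity claim in the text: one fibre at
# 10 Gb/s over 64 wavelength channels vs. 64 kb/s digital voice calls.
channel_rate_bps = 10e9   # 10 Gb/s per wavelength channel
num_channels = 64         # 64 DWDM channels, as in the text
voice_call_bps = 64e3     # one uncompressed G.711 phone call

total_capacity = channel_rate_bps * num_channels
simultaneous_calls = total_capacity / voice_call_bps
print(f"{simultaneous_calls:,.0f} simultaneous calls")
```

With these assumptions the single fibre carries exactly 10 million calls, so the "over 10 million" claim holds as soon as any voice compression is applied.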
Trends in Broadband Networks: Optical Fibre Design for the Future
(WP2401, issued April 2011. Authors: Dr. Merrion Edwards and Vanesa Diaz.)

The optical fibre is basically a strand of glass 125 μm in diameter that owes its performance to clever material processing and intricate refractive index profiling in the core of the strand. But not all fibres are the same: fibre performance can be altered by modifying the materials, the processes, and the refractive index profile. Fibre optimisation for different application spaces is therefore possible, and although G.652 standard single-mode fibre is the most common throughout the world, there are other fibre types, such as G.655 non-zero dispersion-shifted fibres and laser-optimised OM3 and OM4 multimode fibres, that have been optimised to deliver cost savings and performance advantages in their respective application spaces.

But what about fibre optimisation in the world of next-generation broadband access networks and fibre to the home? One well-known limitation of traditional optical fibre is that when it is subjected to bends of less than 60 mm in diameter, significant levels of light leak from the core, leading to signal power loss. In addition, optical fibre always exhibits a degree of transmission loss, which manifests as a reduction in signal power as light travels along the core, caused by light scattering and photon absorption.

Next Generation Fibre for Broadband Networks: Outside Plant Cabling 

The aforementioned trends in broadband networks put pressure on fibre and system solutions. In the outside plant, classical GPON/EPON and BPON systems use wavelengths from 1290 nm to 1330 nm for upstream transmission and 1480 nm to 1500 nm for the downstream, with 1550 nm reserved for analogue video broadcast. As we move towards next-generation PON systems, and in particular 10G-EPON, operators will need to use an even broader spectrum of the fibre, from 1260 nm up to 1600 nm.
The trend towards central office consolidation, coupled with initiatives (often government driven) to bring fast broadband services not just to cities but to rural communities as well, is resulting in much longer link lengths in the access network than originally conceived, and in the potential for increased areas of no coverage (“not-spots”).

These longer link lengths put pressure on cable transmission losses, which must be minimised in order to deliver adequate signal levels to the customer. Standards for extended-reach systems (e.g. Class C GPON) with higher power loss budgets have been developed, but even these have their limitations in total reach and require incremental capital spend on advanced electronics. Here advances in optical fibre can help. A new wave of innovation in optical fibre at Corning Optical Fiber has delivered a portfolio of new lower-loss standard single-mode G.652 fibres. One of these fibres, Corning® SMF-28e+® LL, features industry-leading low attenuation while being G.652.D compliant and fully backwards compatible with the G.652.D fibres ubiquitously deployed in access networks. This new low-loss fibre delivers industry-leading low attenuation across the newly required broad spectrum of wavelengths, from 1270 nm to 1580 nm.
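As a rough illustration of what lower attenuation buys, the sketch below compares the reach of an attenuation-limited feeder span for a typical fibre (0.35 dB/km at 1310 nm) and a lower-loss fibre (0.32 dB/km). The 3.5 dB budget is an assumed round number chosen for illustration, not a figure from the text, and splices, connectors and margins are ignored:

```python
# Back-of-the-envelope reach comparison for a feeder span, assuming the
# reach is limited purely by fibre attenuation.
budget_db = 3.5            # assumed attenuation budget for the span
alpha_typical = 0.35       # dB/km at 1310 nm, typical G.652.D fibre
alpha_low_loss = 0.32      # dB/km at 1310 nm, lower-loss fibre

reach_typical = budget_db / alpha_typical    # km
reach_low_loss = budget_db / alpha_low_loss  # km

# Central-office coverage area scales with the square of the reach.
area_gain = (reach_low_loss / reach_typical) ** 2 - 1
print(f"reach: {reach_typical:.1f} km -> {reach_low_loss:.2f} km")
print(f"coverage area gain: {area_gain:.1%}")
```

Whatever budget is assumed, the reach ratio is fixed at 0.35/0.32 ≈ 1.09, and squaring it gives a coverage-area gain of just under 20%, consistent with the white paper's own footnoted comparison.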

Next Generation Fibre for Broadband Networks: Indoor Cabling:-

It is not immediately self-evident, but the proliferation of indoor cabling standards to support competitive broadband markets via open access architecture has a striking impact on indoor optical fibre cabling performance requirements. Open access architecture requires that the broadband network be configured so that competitive service providers are not (within reason) precluded from delivering a broadband connection to a customer because of their lack of proximity to that customer or a difference in broadband transmission technology. As shown in Figure 5, every broadband link, irrespective of the system technology deployed, has a maximum power loss budget. If Provider 1, whose Point of Presence (PoP) or central office is closest to the customer, is responsible for deploying the in-building cabling, then in the absence of regulation it could, by installing indoor cabling with high signal power loss, consume the fixed power loss budget and prevent Providers 2 and 3 from delivering a connection to the customer, because of the additional distance (with its associated additional power loss) they must cover to reach the customer. If, however, Provider 1 is required by open access regulation and associated indoor cabling standards to minimise signal power loss in the indoor cabling, then there should be sufficient power loss budget remaining to enable Providers 2 and 3 to reach the customer. To date, open access architecture initiatives in Europe have resulted in such indoor cabling standards being put in place in Germany (max 1.2 dB indoor loss), France (1.5 – 2 dB indoor loss) and Switzerland (max 0.9 dB).

Note: Comparing SMF-28e+ LL, with an attenuation of 0.32 dB/km at 1310 nm, to a typical fibre with an attenuation of 0.35 dB/km results in an extension of the maximum reach of a typical VDSL feeder cable from 10 km to circa 11 km, or an extension of an FTTH feeder link from 19 km to almost 21 km. This increase in feeder cable length increases the area that a central office can cover by almost 20%.

It is worth noting that such tight control of indoor power loss is also beneficial for carriers operating outside open access architecture regulation. For such carriers, tight control over indoor power loss frees up additional power loss budget for the complete link from central office (PoP) to subscriber, which can be used to provide technology robustness to their network: the extra power loss budget can facilitate a future upgrade to higher data rates, or central office consolidation, where the operator's average link lengths will naturally increase.

But what does all this mean for the fibre? If we consider a 1.2 dB indoor cabling power budget, we can see from Figure 6 that this budget is readily consumed by a basic installation involving 50 m of cable, three splices and two connectors (maximum values considered). Moreover, when fibre cables enter a building, they begin to be installed in a new way: the cables must be smaller to be aesthetically pleasing, and they must be routed in very tight spaces. As a consequence, installation of indoor cabling naturally involves a number of tight bends. Tight bends in traditional optical fibre introduce significant signal power loss, yet indoor cabling standards and the desire to minimise indoor cabling loss leave little headroom for bend loss. Hence we need a fibre that is insensitive to bending.
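The budget arithmetic above can be made concrete. The per-component loss maxima below are illustrative assumptions (they are not the Figure 6 values, which are not reproduced here):

```python
# Worst-case indoor loss tally for the basic installation described in
# the text (50 m of cable, three splices, two connectors), against a
# 1.2 dB indoor budget. Per-component maxima are assumed values.
cable_loss_per_km = 0.4   # dB/km for indoor cable at 1310 nm (assumed)
splice_loss = 0.1         # dB per fusion splice (assumed maximum)
connector_loss = 0.5      # dB per connector (assumed maximum)

total = 0.05 * cable_loss_per_km + 3 * splice_loss + 2 * connector_loss
budget = 1.2
print(f"total indoor loss: {total:.2f} dB (budget {budget} dB)")
# A negative headroom means the passive components alone already
# exceed the budget, leaving nothing at all for bend loss.
print(f"headroom for bends: {budget - total:.2f} dB")
```

With these assumed maxima the splices and connectors alone overshoot the 1.2 dB budget, which illustrates the text's point: there is essentially no headroom left for bend loss, so the fibre itself must contribute near-zero bend loss.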

Recognising this, the optical fibre industry responded with the G.657 standard of fibres. Within that standard there are three classifications of performance: G.657.A1, G.657.A2 and G.657.A3/B3. The question is: which G.657 fibre is most suitable? Within any indoor cabling installation it is a fair assumption that at least four tight 90° bends will occur (at circa 7.5 mm radius, equivalent to one full 360° turn). If we compare the bend loss performance of G.657.A1, G.657.A2 and G.657.A3 fibres, we can see from Figure 7 that only G.657.A3 effectively gives near-zero bend loss, and so ensures that indoor cabling loss stays low enough both to comply with the indoor cabling standards and to provide technology robustness for the network. G.657.A3/B3 fibres, like Corning® ClearCurve® ZBL fibre, with their near-zero bend loss performance, are true bend-insensitive fibres, and are yet another example of how innovation can take the already fantastic optical fibre and make it even better. Use of G.657.A3/B3 fibres not only enables compliance with indoor cabling standards but brings other, equally significant, network reliability and subscriber revenue protection benefits. Bringing optical fibre cabling indoors brings with it a heightened risk of public intervention, so that the introduction of accidental bends during the cable's lifetime becomes likely. For all fibres other than G.657.A3/B3, such bends will probably lead to excessive signal power loss, resulting in disconnection of the customer. Repairing customer disconnections drives higher Opex and reduces customer satisfaction; the latter increases customer churn and can result in a significant reduction in subscriber revenues.
Only G.657.A3/B3 fibre, like Corning® ClearCurve® ZBL fibre, with its innovative bend-insensitive design, can provide maximum protection of the customer connection, reducing Opex and protecting subscriber revenues, while also fulfilling the effectively-zero bend loss requirements of indoor cabling standards driven by open access architecture. In this sense the innovation of bend-insensitive optical fibre also enables your broadband network to go further and deliver more for your business.

Conclusion:-
If we reflect again on the direction our world is taking in terms of connectivity, there seems to be no end to the bandwidth requirements. Hence it is comforting to know that the technology that opened the door to this highly connected world in the first place, the optical fibre, keeps delivering new advances, like low-loss Corning® SMF-28e+® LL fibre and bend-insensitive Corning® ClearCurve® ZBL fibre, that make your broadband network go further and bring us, with each step, closer to that exciting future of a super-connected world.



 

Friday 15 July 2016

Wi-Fi not connecting even though there is no issue with the network




People often have trouble with Wi-Fi, and for many different reasons; one of them is an antenna issue.

If you have some hardware knowledge, you will know that there are two or three wires connected to the Wi-Fi card inside a laptop or a modern slim-panel desktop. Typically two cables are connected to the Wi-Fi card: one white and one black. The white cable is the main antenna and the black cable is the auxiliary antenna. If these two cables are not connected properly, Wi-Fi will not work. So we should troubleshoot some basic things first, such as:

1. Turn the network off and on.
2. Disconnect from and reconnect to the network.
3. Disable and re-enable the network adapter.
4. Try to connect to another network.
5. Restart the system.
6. Update the drivers.
7. Update the BIOS version.
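On a Linux laptop managed by NetworkManager, most of the adapter-level steps above can be driven from the command line with nmcli. This is a sketch only: the interface name "wlan0" and the SSID "HomeNetwork" are placeholders you would replace with your own, and the commands assume a reasonably recent NetworkManager:

```shell
# Step 1: turn the Wi-Fi radio off and back on
nmcli radio wifi off
nmcli radio wifi on

# Step 2: disconnect the wireless adapter (placeholder interface name)
nmcli device disconnect wlan0

# Refresh the list of visible networks before reconnecting
nmcli device wifi rescan

# Steps 2/4: reconnect, or try another SSID (placeholder network name)
nmcli device wifi connect "HomeNetwork"
```

On Windows the equivalent checks are done through Device Manager and the network adapter's disable/enable option, as in the numbered list above.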
If after that you are still not able to connect to Wi-Fi, remove the back panel of the laptop where the Wi-Fi card is located, clean the card's contacts with an eraser, and if the antenna connectors have come loose, plug them back in properly. Then try to connect to Wi-Fi again; I think your problem will be resolved. If not, please write a comment and I will get back to you soon with a solution.

Friday 29 April 2016

Some points about RAID

RAID (redundant array of inexpensive disks, now commonly redundant array of independent disks):-

RAID is a technology that is used to increase the performance and/or reliability of data storage. The abbreviation stands for Redundant Array of Inexpensive Disks. A RAID system consists of two or more drives working in parallel. These can be hard disks, but there is a trend to also use the technology for SSDs (solid state drives). There are different RAID levels, each optimized for a specific situation. These are not standardized by an industry group or standardization committee, which explains why companies sometimes come up with their own unique numbers and implementations. This article covers the following RAID levels:
  • RAID 0 – striping
  • RAID 1 – mirroring
  • RAID 5 – striping with parity
  • RAID 6 – striping with double parity
  • RAID 10 – combining mirroring and striping
The software that performs the RAID functionality and controls the drives can either be located on a separate controller card (a hardware RAID controller) or it can simply be a driver. Some operating systems, such as Windows Server 2012 and Mac OS X, include software RAID functionality. Hardware RAID controllers cost more than a pure software solution, but they also offer better performance, especially with RAID 5 and 6.
RAID systems can be used with a number of interfaces, including SCSI, IDE, SATA or FC (Fibre Channel). There are systems that use SATA disks internally but have a FireWire or SCSI interface towards the host system.
Sometimes disks in a storage system are defined as JBOD, which stands for ‘Just a Bunch Of Disks’. This means that those disks do not use a specific RAID level and act as stand-alone disks. This is often done for drives that contain swap files or spooling data.
Below is an overview of the most popular RAID levels:

RAID level 0 – Striping

In a RAID 0 system, data are split up into blocks that get written across all the drives in the array. By using multiple disks (at least 2) at the same time, this offers superior I/O performance. This performance can be enhanced further by using multiple controllers, ideally one controller per disk.
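The striping idea can be sketched in a few lines of Python. This is an illustration of the concept only, not how a real RAID controller works:

```python
# Minimal sketch of RAID 0 block striping: data is split into fixed-size
# blocks that are written to the drives in round-robin order.
BLOCK_SIZE = 4  # bytes per block; real arrays use e.g. 64-512 KiB

def stripe(data: bytes, num_drives: int) -> list[list[bytes]]:
    drives = [[] for _ in range(num_drives)]
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    for i, block in enumerate(blocks):
        drives[i % num_drives].append(block)  # round-robin across drives
    return drives

def read_back(drives: list[list[bytes]]) -> bytes:
    # Interleave blocks from all drives to reconstruct the original data.
    out = []
    for i in range(max(len(d) for d in drives)):
        for d in drives:
            if i < len(d):
                out.append(d[i])
    return b"".join(out)

drives = stripe(b"ABCDEFGHIJKLMNOP", num_drives=2)
print(drives)             # blocks alternate between the two drives
print(read_back(drives))
```

Because consecutive blocks live on different drives, sequential reads and writes can hit both drives at once, which is where the performance gain comes from; it is also why losing any one drive destroys every file.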


Advantages

  • RAID 0 offers great performance, in both read and write operations. There is no overhead caused by parity controls.
  • All storage capacity is used; there is no overhead.
  • The technology is easy to implement.

Disadvantages

  • RAID 0 is not fault-tolerant. If one drive fails, all data in the RAID 0 array are lost. It should not be used for mission-critical systems.

Ideal use

RAID 0 is ideal for non-critical storage of data that have to be read/written at a high speed, such as on an image retouching or video editing station.
If you want to use RAID 0 purely to combine the storage capacity of two drives in a single volume, consider mounting one drive in the folder path of the other drive. This is supported in Linux, OS X as well as Windows, and has the advantage that a single drive failure has no impact on the data of the second disk or SSD.

RAID level 1 – Mirroring

Data are stored twice, by writing them both to the data drive (or set of data drives) and to a mirror drive (or set of drives). If a drive fails, the controller uses either the data drive or the mirror drive for data recovery and continues operation. You need at least 2 drives for a RAID 1 array.


Advantages

  • RAID 1 offers excellent read speed and a write speed comparable to that of a single drive.
  • In case a drive fails, data do not have to be rebuilt; they just have to be copied to the replacement drive.
  • RAID 1 is a very simple technology.

Disadvantages

  • The main disadvantage is that the effective storage capacity is only half of the total drive capacity because all data get written twice.
  • Software RAID 1 solutions do not always allow a hot swap of a failed drive. That means the failed drive can only be replaced after powering down the computer it is attached to. For servers that are used simultaneously by many people, this may not be acceptable. Such systems typically use hardware controllers that do support hot swapping.

Ideal use

RAID-1 is ideal for mission critical storage, for instance for accounting systems. It is also suitable for small servers in which only two data drives will be used.

RAID level 5

RAID 5 is the most common secure RAID level. It requires at least 3 drives but can work with up to 16. Data blocks are striped across the drives, and for each stripe a parity checksum of the block data is written to one of the drives. The parity data are not written to a fixed drive; they are spread across all drives, as the drawing below shows. Using the parity data, the computer can recalculate the data of one of the other data blocks, should those data no longer be available. That means a RAID 5 array can withstand a single drive failure without losing data or access to data. Although RAID 5 can be achieved in software, a hardware controller is recommended; often extra cache memory is used on these controllers to improve write performance.
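The parity trick at the heart of RAID 5 is a bytewise XOR, and it is easy to demonstrate. The sketch below is a conceptual illustration, not a real implementation (real RAID 5 also rotates which drive holds the parity for each stripe):

```python
# The parity block is the bytewise XOR of the data blocks in a stripe,
# so any single missing block can be recomputed from all the others.
def xor_blocks(blocks: list[bytes]) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks on three data drives
parity = xor_blocks(data)            # stored on the parity position

# Simulate losing drive 1: rebuild its block from parity + survivors.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt block:", rebuilt)
```

This is also why RAID 5 writes are slower than reads: every write must update both the data block and the parity block for that stripe.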


Advantages

  • Read data transactions are very fast while write data transactions are somewhat slower (due to the parity that has to be calculated).
  • If a drive fails, you still have access to all data, even while the failed drive is being replaced and the storage controller rebuilds the data on the new drive.

Disadvantages

  • Drive failures have an effect on throughput, although this is still acceptable.
  • This is complex technology. If one of the disks in an array using 4TB disks fails and is replaced, restoring the data (the rebuild time) may take a day or longer, depending on the load on the array and the speed of the controller. If another disk goes bad during that time, data are lost forever.

Ideal use

RAID 5 is a good all-round system that combines efficient storage with excellent security and decent performance. It is ideal for file and application servers that have a limited number of data drives.

RAID level 6 – Striping with double parity

RAID 6 is like RAID 5, but the parity data are written to two drives. That means it requires at least 4 drives and can withstand 2 drives dying simultaneously. The chances that two drives break down at exactly the same moment are of course very small. However, if a drive in a RAID 5 system dies and is replaced by a new drive, it takes hours to rebuild the swapped drive. If another drive dies during that time, you still lose all of your data. With RAID 6, the RAID array will even survive that second failure.

Advantages

  • Like with RAID 5, read data transactions are very fast.
  • If two drives fail, you still have access to all data, even while the failed drives are being replaced. So RAID 6 is more secure than RAID 5.

Disadvantages

  • Write data transactions are slowed down due to the parity that has to be calculated.
  • Drive failures have an effect on throughput, although this is still acceptable.
  • This is complex technology. Rebuilding an array in which one drive failed can take a long time.

Ideal use

RAID 6 is a good all-round system that combines efficient storage with excellent security and decent performance. It is preferable over RAID 5 in file and application servers that use many large drives for data storage.

RAID level 10 – combining RAID 1 & RAID 0

It is possible to combine the advantages (and disadvantages) of RAID 0 and RAID 1 in one single system. This is a nested or hybrid RAID configuration. It provides security by mirroring all data on secondary drives while using striping across each set of drives to speed up data transfers.

Advantages

  • If something goes wrong with one of the disks in a RAID 10 configuration, the rebuild time is very fast, since all that is needed is to copy all the data from the surviving mirror to a new drive. This can take as little as 30 minutes for 1 TB drives.

Disadvantages

  • Half of the storage capacity goes to mirroring, so compared to large RAID 5  or RAID 6 arrays, this is an expensive way to have redundancy.

What about RAID levels 2, 3, 4 and 7?

These levels do exist but are not that common. This is just a simple introduction to RAID systems; you can find more in-depth information on the pages of Wikipedia or ACNC.
Other RAID Levels
The other RAID levels (2, 3, 4, 7, 0+1) are really variants of the main RAID configurations already mentioned, and they're used for specific cases. Here are some short descriptions of each:
•RAID 2 stripes data at the bit level and uses Hamming-code error correction rather than simple parity. RAID 2 is seldom deployed because costs to implement are usually prohibitive (a typical setup requires 10 disks) and it gives poor performance with some disk I/O operations.
•RAID 3 is similar to RAID 5, except that striping occurs at the byte level and the parity is written to a dedicated drive. RAID 3 is seldom used except in the most specialized database or processing environments that can benefit from it.
•RAID 4 is a configuration in which disk striping happens at the block level, rather than at the byte level as in RAID 3, again with a dedicated parity drive.
•RAID 7 is a proprietary level of RAID owned by the now-defunct Storage Computer Corporation.
•RAID 0+1 is often confused with RAID 10 (which is RAID 1+0), but the two are not the same. RAID 0+1 is a mirrored array whose segments are RAID 0 arrays. It's implemented in specific infrastructures requiring high performance but not a high level of scalability.
For most small- to midsize-business purposes, RAID 0, 1, 5 and in some cases 10 suffice for good fault tolerance and performance. For most home users, RAID 5 may be overkill, but RAID 1 mirroring provides decent fault tolerance.
It's important to remember that RAID is not backup, nor does it replace a backup strategy—preferably an automated one. Backing up to a RAID device might well be a part of such a strategy; owning a RAID-enabled device, which you use as your primary server or storage device, is not. RAID can be a great way to optimize NAS and server performance and quickly recover from hardware failure, but it's only part of an overall data-protection strategy.



RAID is no substitute for back-up!

All RAID levels except RAID 0 offer protection from a single drive failure. A RAID 6 system even survives 2 disks dying simultaneously. For complete security you do still need to back-up the data from a RAID system.
  • That back-up will come in handy if all drives fail simultaneously because of a power spike.
  • It is a safeguard when the storage system gets stolen.
  • Back-ups can be kept off-site at a different location. This can come in handy if a natural disaster or fire destroys your workplace.
  • The most important reason to back up multiple generations of data is user error. If someone accidentally deletes some important data and this goes unnoticed for several hours, days or weeks, a good set of back-ups ensures you can still retrieve those files.
Which RAID Is Right for Me?
As mentioned, there are several RAID levels, and the one you choose depends on whether you are using RAID for performance or fault tolerance (or both). It also matters whether you have hardware or software RAID, because software supports fewer levels than hardware-based RAID. In the case of hardware RAID, the type of controller you have matters, too. Different controllers support different levels of RAID and also dictate the kinds of disks you can use in an array: SAS, SATA or SSD.





Tuesday 26 April 2016

Viruses and their effects

Viruses and their precautions:-

In computers, a virus is a program or programming code that replicates by being copied, or by initiating its copying, to another program, computer boot sector or document. Viruses can be transmitted as attachments to an e-mail note or in a downloaded file, or be present on a diskette or CD. The immediate source of the e-mail note, downloaded file, or diskette you've received is usually unaware that it contains a virus. Some viruses wreak their effect as soon as their code is executed; other viruses lie dormant until circumstances cause their code to be executed by the computer. Some viruses are benign or playful in intent and effect ("Happy Birthday, Ludwig!") and some can be quite harmful, erasing data or causing your hard disk to require reformatting. A virus that replicates itself by resending itself as an e-mail attachment or as part of a network message is known as a worm.

Generally, there are three main classes of viruses:-


File infectors. Some file infector viruses attach themselves to program files, usually selected .COM or .EXE files. Some can infect any program for which execution is requested, including .SYS, .OVL, .PRG, and .MNU files. When the program is loaded, the virus is loaded as well. Other file infector viruses arrive as wholly-contained programs or scripts sent as an attachment to an e-mail note.
System or boot-record infectors. These viruses infect executable code found in certain system areas on a disk. They attach to the DOS boot sector on diskettes or the Master Boot Record on hard disks. A typical scenario (familiar to the author) is to receive a diskette from an innocent source that contains a boot disk virus. When your operating system is running, files on the diskette can be read without triggering the boot disk virus. However, if you leave the diskette in the drive, and then turn the computer off or reload the operating system, the computer will look first in your A drive, find the diskette with its boot disk virus, load it, and make it temporarily impossible to use your hard disk. (Allow several days for recovery.) This is why you should make sure you have a bootable floppy.
Macro viruses. These are among the most common viruses, and they tend to do the least damage. Macro viruses infect your Microsoft Word application and typically insert unwanted words or phrases.

The best protection against a virus is to know the origin of each program or file you load into your computer or open from your e-mail program. Since this is difficult, you can buy anti-virus software that can screen e-mail attachments and also check all of your files periodically and remove any viruses that are found. From time to time, you may get an e-mail message warning of a new virus. Unless the warning is from a source you recognize, chances are good that the warning is a virus hoax.

1. Resident Viruses

This type of virus dwells permanently in RAM. From there it can intercept and interrupt all of the operations executed by the system: corrupting files and programs that are opened, closed, copied, renamed, etc.

Examples include: Randex, CMJ, Meve, and MrKlunky.

2. Multipartite Viruses

Multipartite viruses are distributed through infected media and usually hide in the memory. Gradually, the virus moves to the boot sector of the hard drive and infects executable files on the hard drive and later across the computer system.

3. Direct Action Viruses

The main purpose of this virus is to replicate and take action when it is executed. When a specific condition is met, the virus will go into action and infect files in the directory or folder that it is in and in directories that are specified in the AUTOEXEC.BAT file PATH. This batch file is always located in the root directory of the hard disk and carries out certain operations when the computer is booted.

4. Overwrite Viruses

A virus of this kind is characterized by the fact that it deletes the information contained in the files that it infects, rendering them partially or totally useless once they have been infected.

The only way to clean a file infected by an overwrite virus is to delete the file completely, thus losing the original content.

Examples of this virus include: Way, Trj.Reboot, Trivial.88.D.

5. Boot Virus

This type of virus affects the boot sector of a floppy or hard disk. This is a crucial part of a disk, in which information on the disk itself is stored together with a program that makes it possible to boot (start) the computer from the disk.

The best way of avoiding boot viruses is to ensure that floppy disks are write-protected and never start your computer with an unknown floppy disk in the disk drive.

Examples of boot viruses include: Polyboot.B, AntiEXE.

6. Macro Virus

Macro viruses infect files that are created using certain applications or programs that contain macros. These mini-programs make it possible to automate series of operations so that they are performed as a single action, thereby saving the user from having to carry them out one by one.

Examples of macro viruses: Relax, Melissa.A, Bablas, O97M/Y2K.

7. Directory Virus

Directory viruses change the paths that indicate the location of a file. By executing a program (file with the extension .EXE or .COM) which has been infected by a virus, you are unknowingly running the virus program, while the original file and program have been previously moved by the virus.

Once infected, it becomes impossible to locate the original files.

8. Polymorphic Virus

Polymorphic viruses encrypt or encode themselves in a different way (using different algorithms and encryption keys) every time they infect a system.

This makes it impossible for antivirus programs to find them using string or signature searches (because the virus looks different after each encryption), and also enables them to create a large number of copies of themselves.

Examples include: Elkern, Marburg, Satan Bug, and Tuareg.
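To see why fixed-signature searches fail against polymorphic code, consider a toy payload XOR-encoded with a different key each time. The names and payload below are made up for illustration; nothing here is actual virus code, just a demonstration that the on-disk bytes differ on every encoding.

```python
# Toy illustration: the same payload encoded with different XOR keys
# produces different byte strings, so no single fixed signature matches.

def xor_encode(payload: bytes, key: int) -> bytes:
    """XOR every byte with a one-byte key (also decodes, since XOR is its own inverse)."""
    return bytes(b ^ key for b in payload)

payload = b"TOY-PAYLOAD"               # stand-in for an invariant body
copy_a = xor_encode(payload, 0x21)     # "copy" 1, key 0x21
copy_b = xor_encode(payload, 0x5C)     # "copy" 2, key 0x5C

print(copy_a != copy_b)                       # True: copies share no fixed bytes
print(xor_encode(copy_a, 0x21) == payload)    # True: decoding recovers the payload
```

This is why modern scanners rely on emulation and behavioral heuristics rather than raw byte signatures for polymorphic families.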

9. File Infectors

This type of virus infects programs or executable files (files with an .EXE or .COM extension). When one of these programs is run, directly or indirectly, the virus is activated, producing the damaging effects it is programmed to carry out. The majority of existing viruses belong to this category, and they can be classified depending on the actions that they carry out.

10. Encrypted Viruses

This type of virus consists of an encrypted malicious payload paired with a small decryption module. The encryption makes it hard for antivirus software to detect the virus by scanning its code; the antivirus program can usually only detect such a virus when it decrypts itself in order to spread.

11. Companion Viruses

Companion viruses can be considered file infector viruses like resident or direct action types. They are known as companion viruses because once they get into the system they "accompany" the other files that already exist. In other words, in order to carry out their infection routines, companion viruses can wait in memory until a program is run (resident viruses) or act immediately by making copies of themselves (direct action viruses).

Some examples include: Stator, Asimov.1539, and Terrax.1069

12. Network Virus

Network viruses spread rapidly through a Local Area Network (LAN), and sometimes throughout the internet. Generally, network viruses multiply through shared resources, i.e., shared drives and folders. When the virus infects a computer, it searches through the network for its next potential prey. When the virus finishes infecting that computer, it moves on to the next, and the cycle repeats itself.

Among the most dangerous network viruses are Nimda and SQL Slammer.

13. Nonresident Viruses

Nonresident viruses are similar to resident viruses in that they replicate, but they do not stay in memory. Instead they contain a finder module and a replication module: the finder module locates files to infect, and the replication module infects them, selecting one or more files each time it is executed.

14. Stealth Viruses

Stealth viruses try to trick anti-virus software by intercepting its requests to the operating system. This gives them the ability to hide from some antivirus programs, which therefore cannot detect them.

15. Sparse Infectors

In order to spread widely, a virus must attempt to avoid detection. To minimize the probability of its being discovered a virus could use any number of different techniques. It might, for example, only infect every 20th time a file is executed; it might only infect files whose lengths are within narrowly defined ranges or whose names begin with letters in a certain range of the alphabet. There are many other possibilities.

16. Spacefiller (Cavity) Viruses

Many viruses take the easy way out when infecting files; they simply attach themselves to the end of the file and then change the start of the program so that it first points to the virus and then to the actual program code. Many viruses that do this also implement some stealth techniques so you don't see the increase in file length when the virus is active in memory.

A spacefiller (cavity) virus, on the other hand, attempts to be clever. Some program files, for a variety of reasons, have empty space inside of them. This empty space can be used to house virus code. A spacefiller virus attempts to install itself in this empty space while not damaging the actual program itself. An advantage of this is that the virus then does not increase the length of the program and can avoid the need for some stealth techniques. The Lehigh virus was an early example of a spacefiller virus.

17. FAT Virus

The file allocation table (FAT) is the part of a disk used to record where files are stored, and it is a vital part of the normal functioning of the computer.

This type of virus attack can be especially dangerous, by preventing access to certain sections of the disk where important files are stored. Damage caused can result in information losses from individual files or even entire directories.

18. Worms
A worm is technically not a virus, but a program very similar to one: it has the ability to self-replicate and can have negative effects on your system. Like viruses, worms can be detected and eliminated by antivirus software.

Examples of worms include: PSWBugbear.B, Lovgate.F, Trile.C, Sobig.D, Mapson.

19. Trojans or Trojan Horses

Another unsavory breed of malicious code (also not viruses) are Trojans or Trojan horses, which unlike viruses do not reproduce by infecting other files, nor do they self-replicate like worms.

20. Logic Bombs

They are not considered viruses because they do not replicate. They are not even programs in their own right but rather camouflaged segments of other programs.

Their objective is to destroy data on the computer once certain conditions have been met. Logic bombs go undetected until launched, and the results can be destructive.

How to secure your system From virus:-

Antivirus software:
If you are using an outdated antivirus, or one that is not good enough to protect your laptop from viruses, you are putting your data security at risk. The first step is to use an effective antivirus and to keep it updated.
Firewall:
If you are using the default Windows firewall and think you are secure, then think again. The fact is, many Trojans and spyware programs bound with the latest binders are efficient enough to dodge even an updated antivirus. A firewall comes in handy to block such programs from opening any Internet connection. There are plenty of free firewalls to choose from; one of them is Comodo Firewall. Keep your firewall updated and monitor all your internet activity.
Update all your software :
Usually, patches and updates for software are rolled out to add extra features and fix previous bugs. Either keep automatic updates turned on for all your software, or use a freeware tool like the FileHippo Update Checker to check for software updates.
Proper shut down :
Windows, Linux, and Mac all have a dedicated option to shut down your computer. Use it to shut down your system; otherwise you might end up creating lots of dump files and corrupted software.
Use your browser's privacy settings:
Being aware of how websites might use your private information is important to help prevent targeted advertising, fraud, and identity theft. If you're using Internet Explorer, you can adjust your Privacy settings or restore the default settings whenever you want. For details, see Change Internet Explorer Privacy settings. 
Turn on User Account Control (UAC):
When changes are going to be made to your computer that require administrator-level permission, UAC notifies you and gives you the opportunity to approve the change. UAC can help keep viruses from making unwanted changes. To learn more about enabling UAC and adjusting the settings, see Turn User Account Control on or off.

Install tracking software :

Laptops are portable, and the chances of losing one while traveling are high. Use laptop-tracking software to trace a stolen laptop. One such program is Adeona; it is open source and free to use.
Backup your files at regular Interval:
Prevention is better than cure, and the same goes here. Make it a practice to back up your files to a portable hard disk or to online storage. One such free online storage service is SkyDrive, which offers 25 GB of free online storage.
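A regular backup routine like the one suggested can be as simple as copying a folder to a timestamped destination. A minimal sketch (the paths and folder names are placeholders; in practice you would point `backup_folder` at your documents folder and an external drive):

```python
# Minimal timestamped-backup sketch: copy a folder to a backup location.
# Paths here are hypothetical; the demo uses temporary directories.
import os
import shutil
import tempfile
from datetime import datetime

def backup_folder(source: str, backup_root: str) -> str:
    """Copy `source` into `backup_root` under a timestamped folder name."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = os.path.join(backup_root, f"backup-{stamp}")
    shutil.copytree(source, dest)   # destination must not already exist
    return dest

# Demo with throwaway directories standing in for real folders:
src = tempfile.mkdtemp()
root = tempfile.mkdtemp()
with open(os.path.join(src, "notes.txt"), "w") as f:
    f.write("important data")
dest = backup_folder(src, root)
print(os.path.exists(os.path.join(dest, "notes.txt")))  # True
```

Scheduling a script like this with Task Scheduler or cron turns it into the "regular interval" backup the tip describes.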
Use a password for login:
Most laptop users tend to avoid using a password, and this leaves a big security loophole. Any intruder can take control of your system when you connect to a network, and once the system is compromised, the attacker has easy access to all your data files. So use a strong password to keep your laptop safe; always try to use a complex password combining letters, numbers, and special characters.
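The complex-password advice can be checked mechanically. The rule below is a hypothetical rule of thumb, not any official standard: it requires a minimum length plus at least one letter, one digit, and one special character.

```python
# Hypothetical password strength check: length + letters + digits + symbols.
import string

def is_strong(password: str, min_length: int = 8) -> bool:
    """Rule of thumb only: real policies also check dictionaries and reuse."""
    return (len(password) >= min_length
            and any(c.isalpha() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))

print(is_strong("password"))        # False: no digits or special characters
print(is_strong("P@ssw0rd!2016"))   # True: mixes all three character classes
```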
Defragmentation:
Windows has a built-in feature called defragmentation that keeps scattered files organized. You can access it under My Computer > Manage > Disk Defragmenter.
Make sure you follow these few tips to ensure the security and safety of your laptop.
Avoid Shady Web Sites – If you need to look at porn, make sure you do it in a virtual environment; you are almost certain to get a virus or spyware if you browse porn sites directly on your computer. Virtualization allows you to run programs like Internet Explorer in a virtual environment that does not affect your current operating system. If you want to find out more, search for “Virtual PC” or “VMware” in Google. Otherwise, simply avoid going to shady web sites.

 


Read Full

Monday 25 April 2016

Why RAM is needed in computer & What is the work of memory???

MEMORY

Memory is a major part of a computer and is categorized into several types. It is where users save information and programs. Computer memory offers several kinds of storage media: some can store data temporarily and some permanently. Memory holds the instructions and data that the Central Processing Unit (CPU) works with.

Computers don't remember or forget things the way that human brains do. Computers work in binary (explained more fully in the box below): they either know something or they don't—and once they've learned, barring some sort of catastrophic failure, they generally don't forget. Humans are different. We can recognize things ("I've seen that face before somewhere") or feel certain that we know something ("I remember learning the German word for cherry when I was at school") without necessarily being able to recollect them. Unlike computers, humans can forget... remember... forget... remember... making memory seem more like art or magic than science or technology. When clever people master tricks that allow them to memorize thousands of pieces of information, they're celebrated like great magicians—even though what they've achieved is far less impressive than anything a five-dollar, USB flash memory stick could do!

Types of Computer Memory:

Memory is an essential element of a computer; without it, a computer cannot perform even simple tasks. The performance of a computer is based mainly on its memory and CPU. Memory is the internal storage medium of the computer and is majorly categorized into two types: main memory and secondary memory.
1. Primary Memory / Volatile Memory.
2. Secondary Memory / Non Volatile Memory.

1. Primary Memory / Volatile Memory:

When many people hear the term 'primary storage,' they think of the hard drive on the computer - where you store or save your data. But primary storage is actually where the data you are actively using is being stored. In other words, whatever you are working on at the moment is being held in primary storage. Primary memory is also called volatile memory because it cannot store data permanently: its contents are lost when the power goes off. It is also known by another name, RAM.



Random Access Memory (RAM):

The primary storage is referred to as random access memory (RAM) due to the random selection of memory locations. It performs both read and write operations on memory. If power failures happened in systems during memory access then you will lose your data permanently. So, RAM is volatile memory. RAM categorized into two types.
  • Static RAM
  • Dynamic RAM

Dynamic RAM : loses its stored information in a very short time (a few milliseconds) even when the power supply is on. D-RAMs are cheaper and slower.
Like a microprocessor, a DRAM chip is an Integrated Circuit (IC) made of millions of transistors and capacitors.
In the most common form of computer memory, a dynamic memory cell (one transistor paired with one capacitor) represents a single bit of data. The capacitor holds the bit of information – a 0 or a 1. The transistor acts as a switch that lets the control circuitry on the memory chip read the capacitor or change its state. A capacitor is like a small bucket that is able to store electrons. To store a 1 in the memory cell, the bucket is filled with electrons.


To store a 0, it is emptied. The problem with the capacitor’s bucket is that it has a leak: in a matter of a few milliseconds a full bucket becomes empty. Therefore, for dynamic memory to work, either the CPU or the memory controller has to come along and recharge all of the capacitors holding a 1 before they discharge. To do this, the memory controller reads the memory and then writes it right back. This refresh operation happens automatically thousands of times per second.


This refresh operation is where dynamic RAM gets its name. Dynamic RAM has to be dynamically refreshed all of the time or it forgets what it is holding. The downside of all of this refreshing is that it takes time and slows down the memory.


Static RAM uses a completely different technology. S-RAM retains stored information only as long as the power supply is on. Static RAMs are costlier and consume more power, but they have higher speed than D-RAMs. They store information in flip-flops.


In static RAM, a form of flip flop holds each bit of memory. A flip-flop for a memory cell takes four or six transistors along with some wiring, but never has to be refreshed. This makes static RAM significantly faster than dynamic RAM. However, because it has more parts, a static memory cell takes up a lot more space on a chip than a dynamic memory cell. Therefore, you get less memory per chip, and that makes static RAM a lot more expensive. Static RAM is fast and expensive, and dynamic RAM is less expensive and slower. Static RAM is used to create the CPU’s speed sensitive cache, while dynamic RAM forms the larger system RAM space.


EDO (Extended Data Output) RAM : In EDO RAM, any memory location can be accessed. It stores 256 bytes of data in latches; the latches hold the next 256 bytes of information so that in most programs, which execute sequentially, the data is available without wait states.


SDRAM (Synchronous DRAM), SGRAM (Synchronous Graphics RAM) : These RAM chips use the same clock rate as the CPU. They transfer data when the CPU expects it to be ready.
DDR-SDRAM (Double Data Rate SDRAM) : This RAM transfers data on both edges of the clock, so the data transfer rate is doubled.


ROM (Read Only Memory) : It is non-volatile memory, i.e., the information stored in it is not lost even if the power supply goes off. It is used for the permanent storage of information and also possesses the random-access property. Information cannot be written into a ROM by users/programmers; in other words, the contents of ROMs are decided by the manufacturer.


The following are the types of ROM:


(i) PROM : A programmable ROM. Its contents are decided by the user, who can store permanent programs, data, etc. in it. The data is fed into it using a PROM programmer.


(ii) EPROM : An erasable PROM. The stored data in an EPROM can be erased by exposing it to UV light for about 20 minutes. Erasing is not easy, because the EPROM IC has to be removed from the computer and exposed to UV light; the entire contents are erased, not just selected portions. EPROMs are cheap and reliable.


(iii) EEPROM (Electrically Erasable PROM) : The chip can be erased and reprogrammed on the board easily, byte by byte. It can be erased within a few milliseconds. There is a limit on the number of times an EEPROM can be reprogrammed, usually around 10,000 times.


Flash Memory : An electrically erasable and programmable permanent memory. It uses a one-transistor memory cell, resulting in high packing density, low power consumption, lower cost, and higher reliability. It is used in digital cameras, MP3 players, etc.



2. Secondary Memory / Non Volatile Memory:

Secondary memory is external, permanent storage held on media such as floppy disks, magnetic disks, and magnetic tapes. It is also called mass storage, auxiliary memory, and external memory. This memory is slower than main memory because it involves mechanical motion during the storage and retrieval of data. It is larger in size than main memory, but the processor is unable to access it directly; data from secondary storage must be loaded into RAM before the processor can start processing it. The main memory links the secondary memory to the processor.

Read Only Memory (ROM) :

ROM is a permanent memory location that offers many standard ways to save data, but it works with read-only operations. No data loss happens when a power failure occurs while ROM is in use.
ROM memory comes in several variants:
1. PROM: Programmable Read Only Memory (PROM) offers large storage but no erase feature. PROM chips are written once and read many times; the programs or instructions stored in a PROM cannot be erased by other programs.
2. EPROM: Erasable Programmable Read Only Memory was designed to overcome the limitations of PROM and ROM. Users can erase the data of an EPROM by passing ultraviolet light over the chip, after which it can be reprogrammed.
3. EEPROM: Electrically Erasable Programmable Read Only Memory is similar to EPROM, but it uses an electrical signal to erase the data.
Cache Memory: Main memory is slower than the CPU, and this access-time mismatch reduces performance. Maintaining a cache memory reduces the speed mismatch. Main memory can store a huge amount of data, but cache memory is normally kept small. Frequently used data from slower storage, such as magnetic disks and main memory, is kept in the cache to give users quick access.

 Magnetic Disks

Speedy access to data, relatively low cost, and the ability to erase and rewrite data make magnetic disks the most widely used storage media on today’s computers. With magnetic disk storage systems, data are written by read/write heads magnetizing the particles a certain way on a medium surface. The particles retain their magnetic orientation so they can be read at a later time, and rewriting to the medium is possible. There are two main types of magnetic disks:

 Floppy Disk:

A floppy disk is a round, flat piece of Mylar coated with ferric oxide, a rust-like substance containing tiny particles capable of holding a magnetic field, and encased in a protective plastic cover, the disk jacket. Data is stored on a floppy disk by the disk drive's read/write head, which alters the magnetic orientation of the particles. Orientation in one direction represents binary 1; orientation in the other, binary 0. Typically, a floppy disk is 5.25 inches in diameter, with a large hole in the center that fits around the spindle in the disk drive. Depending on its capacity, such a disk can hold from a few hundred thousand to over one million bytes of data. A 3.5-inch disk encased in rigid plastic is usually called a microfloppy disk but can also be called a floppy disk.

Hard Disk:

A hard disk is composed of one or more platters that are permanently sealed within a hard metallic casing. Hard disks are fixed inside the computer and are seldom transferred from one computer to another. For better use of the space, a hard disk can be divided into any number of partitions such as C:, D:, E:, etc.; however, making too many partitions is not good practice for managing the disk. Nowadays hard disks of up to 2000 GB are available in the market.

Magnetic Tapes:

Magnetic tape and tape drives are analogous to a home tape recorder system. They use the same reading and recording techniques as the magnetic disk; the medium is a flexible tape coated with magnetic oxide.
A tape is a sequential-access device: for records numbered 1, 2, 3, ..., n, if the tape head is positioned at record 1, then in order to read the nth record it is necessary to read all the physical records from the 1st to the nth, one at a time. If the head is positioned beyond the desired record, the tape must be rewound a specific distance and reading resumed forward.
In contrast, the magnetic disk is a direct-access device. A disk drive doesn't read all the sectors on a disk sequentially to get to the desired record, whereas a magnetic tape drive must read everything between the starting position and the desired location of the data. Magnetic tape was the first kind of secondary memory and is still widely used because of its low cost; however, it is slower than all other secondary storage devices.
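The difference between sequential and direct access can be illustrated with a toy model: reading record n from a "tape" touches every record up to n, while a "disk" jumps straight to it. The record counts below are invented purely for illustration.

```python
# Toy model of sequential (tape) vs direct (disk) access cost.
records = [f"record-{i}" for i in range(1, 101)]  # 100 records

def tape_read(n: int) -> tuple:
    """Sequential access: pass over records 1..n to reach record n."""
    touched = 0
    for _ in range(n):          # each record must be read in turn
        touched += 1
    return records[n - 1], touched

def disk_read(n: int) -> tuple:
    """Direct access: seek straight to record n (one access)."""
    return records[n - 1], 1

print(tape_read(80))  # ('record-80', 80) -- 80 records touched
print(disk_read(80))  # ('record-80', 1)  -- a single access
```

The gap widens with position: the further down the tape the record sits, the more of the tape must be read past to reach it.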

Features

Secondary Storage is magnetic in nature and therefore magnetic mechanisms are used to store data permanently. Data or information is stored in the form of files. A file is an area of the secondary memory where data or information is permanently stored. Each file has its unique file name through which it is accessed. The storage of data in secondary memory follows some file organization techniques such as Sequential, Indexed Sequential and Random/Direct access file organizations. Sequential access file organization is adopted for Magnetic Tape while Random/Direct access file organization is more suitable for Hard Disk or Floppy Disk.

Advantages:

  1. Data remains permanently stored even when the computer is switched off.
  2. This data remains in the memory until deleted by the computer user.
  3. Very high volumes of data can be recorded for a long time, and can be updated and retrieved efficiently.
  4. Transfer of data from one computer to another is performed through this memory, for example via floppy disks or CDs.
  5. System files associated with any Operating System are permanently stored in this memory. These files are loaded into RAM at the time of booting the computer system.
  6. To prevent any damage and loss of data, Backup and Recovery procedures are facilitated through Secondary Memory.

Optical Memory

Optical memory is used for storing large volumes of data like sound, text, graphics, and videos. An optical disk is a removable disk that uses a laser to read and write data. With an optical disk, there is no mechanical arm, as with floppy disks and hard disks. Instead, a high-power laser beam is used to write data by burning tiny pits into the surface of a hard plastic disk. To read the data, a low-power laser light scans the disk surface: pitted areas do not reflect the light and are interpreted as 0 bits; smooth areas reflect it and are interpreted as 1 bits. Because the pits are so tiny, a great deal more data can be represented than is possible in the same amount of space on hard disks. An optical disk can hold over 4.7 gigabytes of data, the equivalent of 1 million typewritten pages.
The optical memory devices are:

Compact Disk (CD)

CD is a non-erasable disk that stores the digitized audio information. The standard system uses 12 cm disks and they can record more than 60 minutes of playing time without any interruption.
CD-ROM
An optical disk form of secondary storage used to hold prerecorded text, graphics, and sound. Like music CDs, a CD-ROM is a read-only disk: the disk’s content is recorded at the time of manufacture and cannot be written on or erased by the user. A CD-ROM disk can hold up to 650 MB of data, equal to 300,000 pages of text.

CD-RW

CD-RW (Compact Disk-Rewritable), also called an Erasable Optical Disk, allows users to record and erase data so that the disk can be used over and over again. Special CD-RW drives and software are required.

DVD (Digital Versatile Disk) : the “Digital Convergence” Disk

The DVD represents a new generation of high density CD-ROM disks, which are read by laser and which have both write-once and rewritable capabilities. According to the various industries sponsoring it, DVD stands for either “Digital Video Disk” or “Digital Versatile Disk”, and it is a CD type disk with extremely high capacity, able to store 4.7-17 GB.

DVD-R

DVD disks that allow one time recording by the consumer. Two types of reusable disks are DVD-RW (DVD Rewritable) and DVD-RAM (DVD Random Access Memory), both of which can be recorded on and erased more than once.

Write Once Read Many (WORM)

WORM is a disk that is more easily written than CD-ROM, making single-copy disks commercially feasible. After the write operation is performed, the disk is read-only. The most popular size is 5¼ inch, which can hold from 200 to 800 MB of data.

Magneto-Optical Disk

There are a few other types of storage systems that use a combination of magnetic and optical technology – the magneto-optical disk is one of them. M-O disks can store up to 5.2 GB of data.
A very common application of optical memory, especially CD-ROM, is the Multimedia Encyclopedia, which contains all 21 volumes of the Academic American Encyclopedia. This encyclopedia comprises the full text of about 33,000 articles as well as a comprehensive index of titles, words, pictures, and maps. In addition, there are thousands of pictures, hundreds of sounds and animations, and dozens of video clips.



How memory stores data in binary format???
Photos, videos, text files, or sound, computers store and process all kinds of information in the form of numbers, or digits. That's why they're sometimes called digital computers. Humans like to work with numbers in the decimal (base 10) system (with ten different digits ranging from 0 through 9). Computers, on the other hand, work using an entirely different number system called binary based on just two numbers, zero (0) and one (1). In the decimal system, the columns of numbers correspond to ones, tens, hundreds, thousands, and so on as you step to the left—but in binary the same columns represent powers of two (two, four, eight, sixteen, thirty two, sixty four, and so on). So the decimal number 55 becomes 110111 in binary, which is 32+16+4+2+1. You need a lot more binary digits (also called bits) to store a number. With eight bits (also called a byte), you can store any decimal number from 0–255 (00000000–11111111 in binary).
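The worked example above (decimal 55 = binary 110111) can be checked by converting both ways; a short sketch of the repeated-division and place-value methods:

```python
# Decimal <-> binary, confirming the worked example: 55 == 0b110111.

def to_binary(n: int) -> str:
    """Repeatedly divide by 2, collecting remainders (the bits)."""
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits   # newest remainder is the next column left
        n //= 2
    return bits or "0"

def from_binary(bits: str) -> int:
    """Each step left shifts the value one column, i.e. multiplies by 2."""
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)
    return value

print(to_binary(55))          # 110111
print(from_binary("110111"))  # 55 = 32 + 16 + 4 + 2 + 1
print(to_binary(255))         # 11111111 -- one byte, eight bits
```

Python's built-ins `bin(55)` and `int("110111", 2)` do the same conversions directly.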
One reason people like decimal numbers is because we have 10 fingers. Computers don't have 10 fingers. What they have instead is thousands, millions, or even billions of electronic switches called transistors. Transistors store binary numbers when electric currents passing through them switch them on and off. Switching on a transistor stores a one; switching it off stores a zero. A computer can store decimal numbers in its memory by switching off a whole series of transistors in a binary pattern, rather like someone holding up a series of flags. The number 55 is like holding up five flags and keeping one of them down, in the pattern 110111.


WHY WE NEED RAM IN CPU???


The device you're using when actively working on your computer is the RAM. The RAM is the only storage that has direct access to the central processing unit (CPU), or the brains of your computer, through what is called a bus. The bus is a pathway or circuit that allows the RAM to communicate directly with the CPU to complete the tasks you want accomplished.
That sounds complicated, but what does it really mean? Let's say you arrive at school carrying your book bag and sit down at your empty desk. The empty desk is the RAM, primary memory. As you get ready to study, you take items out of your book bag and place them on the desk. The desk gives you the space you need to get down to business and use your items, such as pencils, folders and books. Just like RAM, the desk serves as the workspace you need to accomplish tasks.
The down side of RAM is that it is volatile. If something happens - say, your computer suddenly shuts down - anything that is stored there is lost. This is why you are constantly told to save your work often - once you save the work from RAM into something more permanent (like your hard drive or a portable storage device), you don't have to worry about losing your work.



Read Full

Wednesday 20 April 2016

Wifi Issue And Its Solution

 WHAT ARE THE COMMON WI-FI ISSUES AND HOW TO RESOLVE THEM


 
Wireless Internet networks afford us the luxury of browsing the Web cable-free, but a connection that relies on radio waves is subject to failure due to interference, signal range limits, hardware problems, and operator error. With that in mind, we've put together a quick guide to the most common Wi-Fi troubles and how to fix them.
If you're struggling with your Wi-Fi network at home or in the office, read on to discover a few different ways to troubleshoot your Wi-Fi woes and restore your wireless network.

7 Common Problems Related to Wi-Fi

1. Unable to connect to Wi-Fi.
2. Wi-Fi is connected but unable to access the Internet.
3. Signal is weak.
4. Wi-Fi range is low.
5. Internet is slow.
6. System hangs when connected to Wi-Fi.
7. Signal drops after some time.

SOLUTIONS

Check Your Laptop for a Wi-Fi Button or Switch

Having trouble connecting to Wi-Fi in your favorite coffee shop or airport lounge? The problem might be right under your fingertips. If your laptop or netbook isn't connecting to a local wireless router at all and you can't view a list of nearby wireless networks, check to see if your laptop has a Wi-Fi button or switch that you may have pressed accidentally. Many laptops include a function button (labeled with an icon representing a wireless router or network) on the top of the keyboard, or a switch on the front or sides of the laptop. If you find such a button, check to see whether pressing it enables you to get connected.

Reboot Your Computer and Your Wireless Router

If you still can't connect a computer or device, reboot it. This step sounds simple, but your router, your PC's Wi-Fi adapter, or your operating system may have a software or firmware problem that a simple reboot would fix. If some or all of your devices refuse to connect, try unplugging the router for 5 to 10 seconds and then plugging it back in. This technique of "power cycling" your router is a tried-and-true method for restoring a previously functional wireless network to good working order.

Change the Wi-Fi Channel on the Router

Most Wi-Fi routers and devices use the 2.4GHz radio band, which has 11 channels in the United States. Unfortunately, only 3 of the 11 channels can run simultaneously without overlapping or interfering with each other: channels 1, 6, and 11. Worse, many routers are set to broadcast on channel 6 by default. Consequently, interference from other routers in the vicinity is a common source of connectivity problems, especially in densely populated areas such as apartment complexes and shopping centers. Other radios that use the 2.4GHz band--for example, baby monitors and cordless phones--and other electrical devices (such as microwave ovens) can interfere with Wi-Fi signals, too.
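The overlap rule described above can be sketched in a few lines of Python: channel centers sit 5MHz apart while each Wi-Fi signal is roughly 22MHz wide, so two channels interfere unless their numbers differ by at least 5 (a simplified model, ignoring regional variations):

```python
# Two 2.4GHz channels overlap unless they are at least 5 apart,
# because centers are 5MHz apart and signals are about 22MHz wide.
def channels_overlap(a, b):
    return abs(a - b) < 5

# Channels 1, 6 and 11 are the only trio that is mutually non-overlapping.
safe = [c for c in range(1, 12)
        if all(not channels_overlap(c, other)
               for other in (1, 6, 11) if other != c)]
print(safe)   # [1, 6, 11]
```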
To see if other wireless routers might be interfering, take a look at the list of nearby wireless networks. If you're using Windows, click the network icon in the lower right corner. If you see other network names, especially those with more than one bar of signal, they could be interfering with your signal.
You can try to dodge interference by changing your router to another channel. You can blindly choose a channel (going with 1 or 11 is probably your best bet) or you can make a better-educated selection by checking to see which channels nearby networks are using so you can use a different channel. You can check with a free program like InSSIDer or Vistumbler, or use the Web-based Meraki WiFi Stumbler. If you don't have access to one of these applications on your laptop, you can use a free app like Wifi Analyzer (on Android devices) or Wi-Fi Finder (on Apple iOS devices) on your smartphone or tablet to scan for Wi-Fi networks.
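Once a scanner has shown you which channels the nearby networks use, the "better-educated selection" above boils down to picking whichever of 1, 6, or 11 has the least interference. A rough sketch (the scan numbers below are made up for illustration):

```python
# Sketch: given a scan of nearby networks (channel -> number of networks
# seen on it), pick the least-congested of the non-overlapping channels.
# A network on channel c interferes with any channel within 4 of c.
def best_channel(scan):
    def interference(target):
        return sum(count for ch, count in scan.items()
                   if abs(ch - target) < 5)
    return min((1, 6, 11), key=interference)

nearby = {1: 4, 3: 1, 6: 6, 9: 2, 11: 2}   # hypothetical scan results
print(best_channel(nearby))                # 11 in this example
```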
Once you've decided on a channel to switch to, you'll need to log in to your router's control panel and change the channel. To access the router's Web-based control panel, open a new window in your browser while you're connected to your router's wireless network and then type in its IP address.
If you don't know your router's IP address, refer to the wireless connection details: In the lower-right corner of your Windows desktop, right-click the network icon and open the Network and Sharing Center. Select the wireless network that you wish to view, and click the Details button. You should now see the router's IP address listed as the Default Gateway.
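On Linux, the same Default Gateway address can be read from the output of the `ip route` command. Here is a small sketch that parses it; the sample output is a made-up example, and on a real machine you would run `ip route` yourself:

```python
# Sketch: pulling the router's IP address (the "Default Gateway") out of
# Linux `ip route` output. The sample below is made-up example output;
# on Windows, `ipconfig` lists the same address as "Default Gateway".
sample_output = """\
default via 192.168.1.1 dev wlan0 proto dhcp metric 600
192.168.1.0/24 dev wlan0 proto kernel scope link src 192.168.1.42
"""

def default_gateway(route_output):
    # The default route line looks like: "default via <gateway-ip> dev ..."
    for line in route_output.splitlines():
        parts = line.split()
        if parts[:2] == ["default", "via"]:
            return parts[2]
    return None

print(default_gateway(sample_output))   # 192.168.1.1
```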
Next, log in to your router control panel with the appropriate username and password. If you don't know the password, you may never have changed it--so try the default password, which you can look up on RouterPasswords.com. If your Internet service provider supplied your router, you may have to call your ISP for help in accessing the password.
After logging in to the router, find the wireless settings and change the channel. Many routers have an automatic channel selection feature; if yours does, you can disable it and manually choose a channel. Again, for maximum performance, try to stick with channel 1 or 11. Once you've saved and applied the settings, your router may reboot; if so, reconnect and then check to see whether your connectivity problem persists. If so, you may need to try another channel.

Check and Reposition the Wireless Router

If the connection difficulty seems to arise only when you're rather far away from your wireless router, the problem could be that you're on the fringe of the router's coverage zone. The simple way to fix this is to buy a router with better range, but you can take some other steps before going to the trouble (and expense) of buying a new router. First, make sure that your router's antennas are securely attached and are positioned upright. Next, confirm that the router isn't buried or blocked behind large objects that might cause the signal to degrade faster than it normally would. For best results, place your router out in the open so the signal can travel freely.
For best results, try moving your wireless router to the center of the room, with a clear line of sight to each of your wireless devices.
If you still aren't getting the Wi-Fi range you'd like, consider moving the router and the modem to a more central location within your desired coverage area. Of course, your placement options are limited: The router must be near another cable or telephone jack. Most cable modems can plug into any cable outlet, and DSL modems usually plug into other telephone jacks--but remember to switch out any filters that might be attached.

Restore the Router's Settings to the Factory Defaults

If you continue to have trouble getting various computers and devices to connect to your router, you can try restoring the router's settings to their factory default values. Unfortunately, this wipes out all of the settings, so you'll have to secure your home or office Wi-Fi again, and you may have to reconfigure your Internet connection settings. When you're ready to restore the router, find the small reset button or hole on the router's back, and use a pen or paperclip to press and hold the button for at least 10 seconds.

Reinstall the Wireless Adapter Driver or Software

If after completely resetting the router, you find that connection problems involving a single PC on your Wi-Fi network still haven't gone away, consider reinstalling the driver and/or software for the Wi-Fi adapter on that PC. The first step in this process is to download the latest network adapter driver or software from your computer manufacturer's website (or from the site of the adapter's manufacturer, if you purchased the adapter separately). From there, carefully follow the manufacturer's directions for reinstalling the software on your adapter. Reboot your PC afterward, and you should be good to go.

Upgrade the Router Firmware

If connection problems survive the reinstallation of your network adapter drivers, your router may suffer from a technical issue. Router vendors typically release firmware updates for their routers to fix known issues and sometimes even to add new features.
To see if there's a new firmware release for your router, first log into its web-based control panel (see the section above for help) and check which firmware version you have installed, usually shown on a system or status page. Next, navigate to the website of the router's manufacturer and check the support/downloads section for the newest firmware release for your particular model. Chances are you don't have the latest version; if so, download the latest firmware and follow the instructions on how to update it.
If that doesn't fix your problem, you could also try seeking out open-source router firmware and experimenting with using it to improve your Wi-Fi network. For more details on that process, check out our guide to enhancing your router with open-source software. If you try all these different solutions and your Wi-Fi network is still having problems, it may be time to invest in some new networking hardware (or just head on down to Starbucks and borrow theirs).

These are the common Wi-Fi issues and their solutions.



Thanks for reading. I hope your problem is solved.


