HARDWARE



Motherboard
A typical computer is built with the microprocessor, main memory, and other basic components on the motherboard. Other components of the computer such as external storage, control circuits for video display and sound, and peripheral devices are typically attached to the motherboard via ribbon cables, other cables, and power connectors.
Historically, a computer was built in a case or mainframe with a series of wired-together connectors called a backplane, into which the CPU, memory and I/O on separate cards were plugged. With the arrival of the microprocessor, it became more cost-effective to place the backplane connectors, processor and glue logic onto a single 'mother' board, and have the video, memory and I/O on 'child' cards - hence the terms 'motherboard' and 'daughterboard'.
One of the first popular microcomputers to feature this design was the Apple II, which had a motherboard and 8 expansion slots.
There is more information about IBM-compatible personal computers in PC motherboard.
Connects to: Devices via Cables
Chips via Sockets
Riser Cards via one of
PCI
AGP
PCI Express

Form Factors: AT
ATX
microATX 
Form factors
Motherboards are available in a variety of form factors, which usually correspond to a variety of case sizes. The following is a summary of some of the more popular PC motherboard sizes available:

PC/XT - the original open motherboard standard, created by IBM for the original IBM PC. Its open standard led to a large number of clone motherboards, and it therefore became the de facto standard.
AT form factor (Advanced Technology) - the first form factor to gain wide acceptance, successor to PC/XT. Also known as Full AT, it was popular during the 386 era. Now obsolete, it is superseded by ATX.
Baby AT - IBM's successor to the AT motherboard, it was functionally equivalent to the AT but gained popularity due to its significantly smaller physical size. It usually comes without an AGP port.
ATX - the evolution of the Baby AT form factor, it is the most popular form factor available today.
ETX - used in embedded systems and single-board computers.
Mini-ATX - essentially the same as the ATX layout, but again, with a smaller footprint.
microATX - again, a miniaturization of the ATX layout. It is commonly used in the larger cube-style cases such as the Antec ARIA.
FlexATX - a subset of microATX allowing more flexible motherboard design, component positioning and shape.
LPX - based on a design by Western Digital, it allows for smaller cases based on the ATX motherboard by arranging the expansion cards in a riser (an expansion card in itself, attaching to the side of the motherboard). This design allows the cards to rest parallel to the motherboard as opposed to perpendicular to it. The LPX motherboard is generally only used by large OEM manufacturers.

Mini LPX - a smaller subset of the LPX specification.
NLX - a low-profile motherboard, again incorporating a riser, designed in order to keep up with market trends. NLX never gained much popularity.
BTX (Balanced Technology Extended) - a newer standard proposed by Intel as an eventual successor to ATX.
microBTX and picoBTX - smaller subsets of the BTX standard.
Mini-ITX - VIA's highly integrated small form factor motherboard, designed for uses including thin clients and set-top boxes.
WTX (Workstation Technology Extended) - a large motherboard (more so than ATX) designed for use with high-power workstations (usually featuring multiple processors or hard drives).
While most desktop computers use one of these motherboard form factors, laptop (notebook) computers generally use highly integrated, customized and miniaturized motherboards designed by the manufacturers. This is one of the reasons that notebook computers are difficult to upgrade and expensive to repair - often the failure of one integrated component requires the replacement of the entire motherboard, which is also more expensive than a regular motherboard due to the large number of integrated components in it.


What is a Graphics Card?

Graphics Card
The term "graphics card" usually refers to a separate, dedicated expansion card that is plugged into a slot on the computer's motherboard, as opposed to a graphics controller integrated into the motherboard chipset.
Hardware
A video card consists of a printed circuit board on which the components are mounted. These include:
Graphics processing unit (GPU)
The GPU is a microprocessor dedicated to manipulating and rendering graphics according to the instructions received from the computer's operating system and the software being used. At their simplest level, GPUs include functions for manipulating two-dimensional graphics, such as blitting. Modern and more advanced GPUs also include functions for generating and manipulating three-dimensional graphics elements, rendering objects with shading, lighting, texture mapping and other visual effects.
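To illustrate the kind of two-dimensional operation mentioned above, the following is a minimal sketch, in Python, of a blit: copying a rectangular block of pixels from one surface into another at a given offset. The array layout, names and sizes are purely illustrative and do not correspond to any particular GPU API.

    # Minimal blit sketch: copy a rectangular block of pixels ("sprite")
    # into a destination surface at a given offset. Purely illustrative.
    def blit(dst, src, dst_x, dst_y):
        src_h, src_w = len(src), len(src[0])
        for y in range(src_h):
            for x in range(src_w):
                dst[dst_y + y][dst_x + x] = src[y][x]

    framebuffer = [[0] * 8 for _ in range(8)]   # 8x8 background surface
    sprite = [[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]]                        # 3x3 source block
    blit(framebuffer, sprite, 2, 4)             # place it at column 2, row 4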
Video memory
Unlike integrated video controllers, which usually share memory with the rest of the computer, most video cards have their own separate onboard memory, referred to as video RAM (VRAM). VRAM is used to store the display image, as well as textures, buffers (the Z-buffer necessary for rendering 3D graphics, for example) and other elements. VRAM typically runs at higher speeds than desktop RAM. For the most part, current graphics cards use GDDR3 or GDDR4, whereas desktop RAM still uses DDR2.
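As a rough illustration of why a Z-buffer is kept in video memory, the sketch below shows the depth test performed for every pixel when 3D objects overlap. The buffer names and the "smaller value means closer" convention are assumptions chosen for the example, not a description of any specific card.

    # Z-buffer sketch (illustrative): a pixel is only written if its depth
    # is smaller (closer to the viewer) than what is already stored there.
    WIDTH, HEIGHT = 4, 4
    color_buffer = [[None] * WIDTH for _ in range(HEIGHT)]
    depth_buffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]

    def draw_pixel(x, y, depth, color):
        if depth < depth_buffer[y][x]:       # new fragment is nearer
            depth_buffer[y][x] = depth
            color_buffer[y][x] = color

    draw_pixel(1, 1, 10.0, "far triangle")
    draw_pixel(1, 1, 2.5, "near triangle")   # overwrites: closer to the viewer
    draw_pixel(1, 1, 7.0, "mid triangle")    # rejected: hidden behind it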
Video BIOS
The video BIOS or firmware chip contains the basic program that governs the video card's operations and provides the instructions that allow the computer and software to interface with the card.
Connects to:
Motherboard via one of
AGP
PCI Express
PCI
Display via one of
VGA connector
Digital Visual Interface
Composite video
Component Video

Common Manufacturers:
ATI
NVIDIA 


Processors

A processor is an electronic device designed to accept data, perform prescribed mathematical and logical operations at high speed, and output the results of these operations.

AMD Opteron Processors
The Opteron was released on April 22, 2003, and was intended to compete in the server market, particularly in the same segment as the Intel Xeon processor.
The two key capabilities
Feature-wise, Opteron combines two important capabilities in a single processor die:
native execution of legacy x86 32-bit applications without speed penalties
native execution of x86-64 64-bit applications (linear-addressing beyond 4 GB RAM)
The first capability is notable because at the time of Opteron's introduction, the only other 64-bit processor architecture marketed with 32-bit x86 compatibility (Intel's Itanium) ran x86 legacy-applications only with significant speed degradation. The second capability, by itself, is less noteworthy, as all major RISC players (SPARC, DEC, HP-PA, IBM Power, MIPS, etc.) have had 64-bit implementations for many years. In combining these two capabilities, however, the Opteron has earned recognition for its ability to economically run the vast installed base of x86 applications, while simultaneously offering an upgrade-path to 64-bit computing.
The Opteron processor possesses an integrated DDR SDRAM / DDR2 SDRAM(Socket F) memory controller. This both reduces the latency penalty for accessing the main RAM and eliminates the need for a separate northbridge chip.
Multi-processor features
In multi-processor systems (more than one Opteron on a single motherboard), the CPUs communicate using the Direct Connect Architecture over high-speed HyperTransport links. Each CPU can access the main memory of another processor, transparently to the programmer. The Opteron approach to multi-processing is not the same as standard symmetric multiprocessing: instead of having one bank of memory for all CPUs, each CPU has its own memory. The Opteron CPU directly supports up to an 8-way configuration, which can be found in mid-level servers. Enterprise-level servers use additional (and expensive) routing chips to support more than 8 CPUs per box.
In a variety of computing benchmarks, the Opteron architecture has demonstrated better multi-processor scaling than the Intel Xeon. In Xeon systems, the total delivered computing power is often less than the sum of the throughputs of the individual CPUs. For example, a Xeon system may execute two simultaneous tasks each at 90% throughput, or four simultaneous tasks each at 80% throughput. Opteron systems suffer much less drop in aggregate throughput, vindicating AMD's architectural decisions. In particular, the Opteron's integrated memory controller, due to Non-Uniform Memory Access, allows the CPU to access local RAM without using the HyperTransport bus. Even for non-local memory access and interprocessor communication, only the initiator and target are involved, keeping bus-utilization to a minimum. In contrast, multiprocessor Xeon system CPUs share a single common bus for both processor-processor and processor-memory communication. As the number of CPUs increases in a Xeon system, contention for the shared bus causes computing efficiency to drop.
Multi-core Opterons
In May of 2005, AMD introduced its first "Multi-Core" Opteron CPUs. At the present time, the term "Multi-Core" at AMD in practice means "dual-core"; each physical Opteron chip actually contains two separate processor cores. This effectively doubles the compute-power available to each motherboard processor socket. One socket can now deliver the performance of two processors, two sockets can deliver the performance of four processors, and so on. Since motherboard costs go up dramatically as the number of CPU sockets increases, multicore CPUs now allow much higher performing systems to be built with more affordable motherboards.
AMD's model number scheme has changed somewhat in light of its new multicore lineup. At the time of its introduction, AMD's fastest multicore Opteron was the model 875, with two cores running at 2.2 GHz each. AMD's fastest single-core Opteron at this time was the model 252, with one core running at 2.6 GHz. For multithreaded applications, the model 875 would be much faster than the model 252, but for single threaded applications the model 252 would perform faster.
Next-Generation AMD Opteron processors are offered in three series: the 1200 Series (up to 1P/2-core), the 2200 Series (up to 2P/4-core), and the 8200 Series (4P/8-core to 8P/16-core). The 1200 Series is built on AMD's new Socket AM2. The 2200 Series and 8200 Series are built on AMD's new Socket F (1207).
Socket 939
AMD has also released Socket 939 Opterons, reducing the cost of motherboards for low-end servers and workstations. The Socket 939 Opterons are identical to San Diego-core Athlon 64s but run at lower clock speeds than the cores are capable of, making them extremely stable. Because this leaves considerable headroom, they overclock very well and are in great demand.
Socket AM2
Socket AM2 Opterons are available for servers that will only have a single-chip setup. These chips may prove to be as successful as the previous-generation Socket 939 Opterons due to the Opteron's overclockability. Codenamed Santa Ana, dual-core AM2 Opterons feature 2×1 MB L2 cache, unlike the majority of their AM2 Athlon 64 X2 cousins, which feature 2×512 KB L2 cache. Dual-core AM2 Opterons face fierce competition from Intel's revamped Xeon processor series.
Socket F (1207)
Socket F is the new socket for the higher-end server-grade Opterons (codename Santa Rosa). Socket F has a 1207 pin layout, as opposed to AM2's 940 pin layout.

AMD Turion 64 Processors
The Turion 64 and Turion 64 X2 processors compete with Intel's mobile processors, initially the Pentium M and currently both of the Intel Core processors.
Turion 64 processors (but not Turion 64 X2 processors) are compatible with AMD's Socket 754 and are equipped with 512 or 1024 KiB of L2 cache, a 64-bit single channel on-die memory controller, and an 800MHz HyperTransport bus. Battery saving features, like PowerNow! (Cool'n'Quiet), are central to the marketing and usefulness of these CPUs.

AMD Athlon 64 Processors
The original Athlon, or Athlon Classic, was the first seventh-generation x86 processor and, in a first, retained the initial performance lead it had over Intel's competing processors for a significant period of time. AMD has continued the Athlon name with the Athlon 64, an eighth-generation processor featuring x86-64 (later renamed AMD64) technology.
Produced: From mid 1999 to 2005
Manufacturer: AMD
CPU Speeds: 500 MHz to 2.33 GHz
FSB Speeds: 100 MHz to 200 MHz
Process: (MOSFET channel length) 0.25 µm to 0.13 µm
Architecture: x86
Sockets:
Slot A
Socket A

Cores: K7 (Argon)
K75 (Pluto/Orion)
Thunderbird
Palomino
Thoroughbred A/B
Barton
Thorton

AMD Sempron Processors
AMD coined the name from the Latin semper, which means "always, everyday", with the purpose of stating that Sempron was the right processor for everyday computing.
The first Sempron CPUs were based on the Athlon XP architecture using the Thoroughbred/Thorton core. These models were equipped with the Socket A interface, 256 KiB L2 cache, and 166 MHz Front side bus (FSB 333). Thoroughbred cores natively had 256KiB of L2 cache, but Thortons had 512KiB of L2 cache, half of which was disabled, and could sometimes be reactivated by bridge modification. Later, AMD introduced the Sempron 3000+ CPU, based on the Barton core (512 KiB L2-cache.) From a hardware and user standpoint, the Socket-A Sempron CPUs were essentially renamed Athlon-XP desktop CPUs. AMD has ceased production of all Socket-A Sempron CPUs.
The second generation (Paris/Palermo core) was based on the architecture of the Socket 754 Athlon 64. Some differences from Athlon 64 processors include a reduced cache size (either 128 or 256 KiB L2), and the absence of AMD64 support in earlier models. Apart from these differences, the Socket 754 Sempron CPUs share most features with the more powerful Athlon 64, including an integrated (on-die) memory controller, the HyperTransport bus, and AMD's "NX bit" feature.
In the second half of 2005, AMD added 64-bit support (AMD64) to the Sempron line. Some journalists (but not AMD) often refer to this revision of chips as "Sempron 64" to distinguish it from the previous revision. AMD's intent in releasing 64-bit entry-level processors was to further the market for 64-bit processors, which, at the time of Sempron 64's first release, was a niche market.
In 2006, AMD announced the Socket AM2 line of Sempron processors. These are functionally equivalent to the previous generation, except they have a dual-channel DDR2 SDRAM memory controller instead of single-channel DDR SDRAM. The TDP of the standard version remains at 62 W (watts), while the new "Energy Efficient Small Form Factor" version has a reduced 35 W TDP. As of 2006, AMD sells both Socket 754 and AM2 Sempron CPUs concurrently.

Intel Xeon Processors
Pentium II Xeon

The first Xeon processor was released in 1998 as the Pentium II Xeon, the replacement for the Pentium Pro. The Pentium II Xeon was based on the P6 microarchitecture and used either the 440GX (a dual-processor workstation chipset) or the 450NX (quad-processor, or eight-way with additional logic) chipset, and differed from the desktop Pentium II in that its off-die L2 cache ran at full speed. It also used a larger slot, known as Slot 2. Cache sizes were 512 KiB, 1 MiB and 2 MiB, and it used a 100 MT/s bus.
Pentium III Xeon
In 1999, the Pentium II Xeon was replaced by the Pentium III Xeon. The initial version (Tanner) was no different from its predecessor, save the addition of Streaming SIMD Extensions (SSE) and a few cache controller enhancements found in the Pentium III. The second version (Cascades) was somewhat more controversial: while it had a 133 MT/s bus, it only had a 256 KiB on-die L2 cache - in other words, there was no difference between it and the desktop Pentium III, the Slot 1 versions of which were also capable of dual-processor operation. To remedy the situation somewhat, Intel released a second version (also called Cascades, but often labelled "Cascades 2 MB" to differentiate it from the 256 KiB version) that came in two variants: with 1 MiB or 2 MiB of L2 cache. The bus speed on these models was fixed at 100 MT/s, though in practice the larger cache was able to offset this.
 Xeon & Xeon MP (32-bit)
The Xeon (dropping "Pentium" from the name) was introduced in mid-2001. The initial variant that used the new NetBurst architecture, Foster, was slightly different from the desktop Pentium 4. It served as a decent workstation chip, but it was almost always outperformed in server applications by the older Cascade 2 MiB core and AMD's Athlon MP. Combined with the need to use expensive Rambus Dynamic RAM, the Foster's sales were somewhat unimpressive.
At most two Foster processors could be accommodated in an SMP system built with a mainstream chipset, so a second version (Foster MP) was introduced with a 1 MiB L3 cache. This improved performance slightly, but not by enough to lift it out of third place. It was also priced much higher than the dual-processor (DP) versions.
In 2002 a 130 nm version of the Xeon (this time codenamed Prestonia) was released, now supporting Intel's new Hyper-Threading technology and having a 512 KiB L2 cache. A new server chipset, E7500 (which allowed the use of dual-channel DDR SDRAM) was released to support this processor in servers, and shortly afterwards the bus speed was boosted to 533 MT/s (accompanied by new chipsets: the E7501 for servers and the E7505 for workstations). The new Xeon performed much better than its predecessor and noticeably better than Athlon MP. The support of new features in the E75xx series also gave it a key advantage over the Pentium III Xeon and Athlon MP (both stuck with rather old chipsets), and it quickly became the top-selling server/workstation processor.
The Xeon MP version of the Prestonia was the Gallatin, which had an L3 cache of 1 MiB or 2 MiB. This version also performed much better than Foster MP, and was popular in servers. Later on, Intel's experience with the 130 nm process allowed them to port the Xeon over to the Gallatin core and also allowed a Xeon MP with 4 MiB cache.
Xeon & Xeon MP (64-bit)
Due to a severe lack of success with Intel's Itanium and Itanium 2 processors, the 90 nm version of the Pentium 4 (Prescott) was built with support for 64-bit instructions (called EM64T by Intel, though it was much the same as AMD's AMD64 instruction set), and a Xeon version codenamed Nocona was released in 2004. Released with it were the E7525 (workstation), E7520 and E7320 (both server) chipsets, which added support for PCI Express, DDR-II and Serial ATA. Generally speaking the Xeon was noticeably slower than AMD's Opteron, though it could also be much faster in situations where Hyper-Threading came into play.
A slightly updated core called Irwindale was released in early 2005, differing from Nocona in having twice the L2 cache and the ability to reduce its clock speed in situations that did not need much processing power. However, independently conducted performance tests show that Irwindale is still outperformed by the AMD Opteron processor.
64-bit Xeon MPs were introduced in April 2005. The cheaper version was Cranford, an MP version of Nocona. The more expensive version was Potomac, a Cranford with 8 MiB of L3 cache.
 Dual-Core Xeon
 DP-capable, 90 nm "Paxville DP"
Intel released the first Dual-Core Xeon, codenamed Paxville DP, on 10 October 2005. Paxville DP is a dual-core version of the NetBurst Irwindale, with 4 MiB of L2 Cache (2 MiB per core). The one Paxville DP model that has been released runs at 2.8 GHz and features an 800 MT/s front side bus.

 7000-series "Paxville MP"An MP-capable version of Paxville DP, codenamed Paxville MP, was released on 1 November 2005. There are two versions: one with 2 MiB of L2 Cache (1 MiB per core), and one with 4 MiB of L2 (2 MiB per core). Paxville MP is called the Dual-Core Xeon 7000-series. Paxville MP ranges between 2.67 and 3.0 GHz (model numbers 7020-7041), with some models having a 667 MT/s FSB, and others having an 800 MT/s FSB.
 LV, Core Duo-based "Sossaman"
On 14 March 2006, Intel released the processor codenamed Sossaman as the Dual-Core Xeon LV (Low Voltage). Sossaman is a low-power, dual-processor capable chip aimed at ultradense environments, based on the Core Duo processor technology. As such, it supports the same feature set as earlier Xeons - Virtualization Technology, a 667 MT/s front side bus, and dual-core processing - but with 32-bit support only.

 5000-series "Dempsey"
On 23 May 2006, Intel released the Dual-Core Xeon codenamed Dempsey. Released as the Dual-Core Xeon 5000-series, Dempsey is a NetBurst processor built on a 65 nm process, and is virtually identical to Intel's "Presler" Pentium Extreme Edition, except for the addition of SMP support, which lets Dempsey operate in dual-processor systems. Dempsey ranges between 2.67 and 3.73 GHz (model numbers 5030-5080). Some models have a 667 MT/s FSB, and others have a 1066 MT/s FSB. Dempsey has 4 MiB of L2 Cache (2 MiB per core). A Medium Voltage model, at 3.2 GHz and 1066 MT/s FSB (model number 5063), has also been released. Dempsey also introduces a new interface for Xeon processors: Socket J, also known as LGA 771.
 5100-series "Woodcrest"
On 26 June 2006, Intel released the Dual-Core Xeon codenamed Woodcrest; it was the first Intel Core microarchitecture processor to be launched on the market. It is a server and workstation version of the Intel Core 2 processor. Intel claims that it provides an 80% boost in performance, while reducing power consumption by 20% relative to the Pentium D.
It has a 1333 MT/s FSB in most models, except for the 5110 and 5120, which have a 1066 MT/s FSB, with the fastest processor clocking in at 3.0 GHz. All Woodcrests use LGA 771 and all but the 5160 and 5148LV have a TDP of 65 W, which is much less than the previous generation of 130 W. The 5160 has a TDP of 80 W, still much less than 130 W, and the 5148LV, which will be available in Q3 2006, has a TDP of 40 W. All models support EM64T, the XD bit, and Virtualization Technology, with Demand-Based Switching only on Dual-Core Xeon 5140 or above.
 7100-series "Tulsa"
Released on August 29, 2006 [1], the 7100 series, codenamed Tulsa, is an improved version of Paxville MP, built on a 65 nm process, with 2 MiB of L2 cache (1 MiB per core) and up to 16 MiB of L3 cache. It uses Socket 604 [2]. Tulsa was released in two lines: the N-line uses a 667 MT/s FSB, and the M-line uses an 800 MT/s FSB. The N-line ranges from 2.5 to 3.33 GHz (model numbers 7110N-7140N), and the M-line ranges from 2.6 to 3.4 GHz (model numbers 7110M-7140M). L3 cache ranges from 4 MiB to 16 MiB across the models. [3]
 3000-series "Conroe"
Intel released rebadged versions of the desktop Core 2 Duo (Conroe) as the Dual-Core Xeon 3000-series at the end of September 2006. Model numbers are 3040, 3050, 3060, and 3070; other than the name, they are otherwise identical to Core 2 Duo models E6300, E6400, E6600, and E6700 [4]. Unlike all previous Xeon-badged processors, they only support single-CPU operation.
 Quad-Core Xeon
 5300-series "Clovertown"
A quad-core successor of Woodcrest for the DP segment, consisting of two Woodcrest dies on a multi-chip module, with 8 MiB of L2 cache (4 MiB per die). Like Woodcrest, lower models use a 1066 MT/s FSB, and higher models use a 1333 MT/s FSB. Intel released Clovertown on November 14, 2006 [5] with models E5310, E5320, E5335, E5345, and X5355, ranging from 1.6 to 2.66 GHz (the Xeon E5335, however, will not be available until Q1 2007). The E and X designations are borrowed from Intel's Core 2 model numbering scheme; an ending of -0 implies a 1066 MT/s FSB, and an ending of -5 implies a 1333 MT/s FSB [6]. All but the X5355 have a TDP of 80 W. The X5355 has a TDP of 120 W. A low-voltage version of Clovertown with a TDP of 50 W has the model number L5310 [7].
Future versions
 3200-series "Kentsfield"
Intel will release rebadged versions of its upcoming quad-core Core 2 Quad processor as the Xeon 3200-series in early 2007. The models will be the X3210 and X3220, running at 2.13 and 2.4 GHz, respectively [8]. Like the 3000-series, these models will only support single-CPU operation.
 Whitefield
A quad-core processor, partially based on Woodcrest, using the new Common System Interface (CSI) bus, which will be shared with the Itanium 2 processors of its generation (beginning with the "Tukwila" core). Whitefield would have had 16 MiB of L2 cache and would have been manufactured on the 65 nm process initially and the 45 nm process later, but it was cancelled from the processor roadmap and replaced with another processor, codenamed Tigerton. Whitefield was the first full processor to be developed at Whitefield, Bangalore, hence the name.
 Tigerton
A quad-core, MP-capable processor to be released in place of Whitefield [9] [10].
 Aliceton
A successor to Tigerton [11].
 Dunnington
A 45 nm successor to Tigerton, which may be either a quad-core or an octa-core processor [12] [13]. Dunnington was originally based on Whitefield, but with Whitefield cancelled, Dunnington's details are less clear [14].

Harpertown
Harpertown is said to be a 45 nm, eight-core processor with 12 MiB of L2 cache [15]. An older rumour stated that it was simply the 45 nm shrink of Woodcrest [16], but that has since changed.
 Gainestown
Quad-core processor based on Intel's upcoming Nehalem microarchitecture [17].
 Beckton/Becton
Nehalem-based MP-capable processor (the correct spelling may be either Beckton or Becton). [18]


What is a Sound Card?


Sound card
Typical uses of sound cards include providing the audio component for multimedia applications such as music composition, editing video or audio, presentation/education, and entertainment (games). Many computers have sound capabilities built in, while others require these expansion cards if audio capability is desired.
General characteristics

Close-up of a sound card PCB, showing electrolytic capacitors (most likely for AC coupling), SMT capacitors and resistors, and a YAC512 two-channel 16-bit DAC.
A typical sound card includes a sound chip, usually featuring a digital-to-analog converter, that converts recorded or generated digital waveforms of sound into an analog format. This signal is led to a (typically 1/8-inch earphone-type) connector where an amplifier, headphones, or similar sound destination can be plugged in. More advanced designs usually include more than one sound chip to separate duties between digital sound production and synthesized sounds (usually for real-time generation of music and sound effects utilizing little data and CPU time).
Digital sound reproduction is usually achieved by multi-channel DACs, able to play multiple digital samples at different pitches and volumes, optionally applying real-time effects like filtering or distortion. Multi-channel digital sound playback can also be used for music synthesis if used with a digitized instrument bank of some sort, typically a small amount of ROM or Flash memory containing samples corresponding to the standard MIDI instruments. (A contrasting way to synthesize sound on a PC uses "audio codecs", which rely heavily on software for music synthesis, MIDI compliance and even multiple-channel emulation. This approach has become common as manufacturers seek to simplify the design and the cost of the sound card itself).
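The following sketch illustrates the kind of mixing described above: several digital samples played back at different pitches and volumes are summed into one output buffer at a fixed sampling rate, which is essentially what a multi-channel DAC does in hardware and what codec-based cards do in software. All sample data, rates and names are illustrative.

    # Mixing sketch (illustrative only): sum several stored samples into one
    # output buffer, each with its own playback rate (pitch) and volume.
    def mix_voices(voices, out_len):
        out = [0.0] * out_len
        for sample, pitch, volume in voices:
            pos = 0.0
            for i in range(out_len):
                if int(pos) >= len(sample):
                    break                        # this voice has finished
                out[i] += volume * sample[int(pos)]
                pos += pitch                     # pitch > 1.0 plays higher/faster
        return out

    square = [1.0, 1.0, -1.0, -1.0] * 8          # a crude square-wave sample
    mixed = mix_voices([(square, 1.0, 0.6),      # voice 1: normal pitch
                        (square, 2.0, 0.3)],     # voice 2: one octave up, quieter
                       out_len=32)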
Most sound cards have a line in connector where the sound signal from a cassette tape recorder or similar sound source can be input. The sound card can digitize this signal and store it (controlled by the corresponding computer software) on the computer's hard disk for editing or further reproduction. Another typical external connector is the microphone connector, for connecting to a microphone or other input device that generates a relatively lower voltage than the line in connector. Input through a microphone jack is typically used by speech recognition software or Voice over IP applications.
Connections
Most sound cards since 1999 conform to Microsoft's PC 99 standard for color coding the external connectors as follows:
Color - Function
Pink - Analog microphone input.
Light blue - Analog line level input.
Lime green - Analog line level output for the main stereo signal (front speakers or headphones).
Black - Analog line level output for rear speakers.
Silver - Analog line level output for side speakers.
Orange - S/PDIF digital output (sometimes used as an analog line output for a center speaker instead).
Voices vs channels
Another important characteristic of any sound card is the number of distinct voices (intended as the number of sounds that can be played back simultaneously and independently) and the number of channels (intended as the number of distinct electrical audio outputs).
For example, many older sound chips had three voices but only one audio channel (mono) into which all the voices were mixed, while the AdLib sound card had 9 voices and 1 mono channel.
For a number of years, most PC sound cards had multiple FM synthesis voices (typically 9 or 18) which were mostly used for MIDI music, but only one (mono) or two (stereo) voices and channels dedicated to playing back digital sound samples, and playing back more than one digital sound sample required performing a software downmix at a fixed sampling rate. Modern low-cost integrated soundcards using an audio codec like the AC'97 still work that way, although they may have more than two sound output channels (surround sound).
Today, a sound card with hardware support for more than the two standard stereo voices is likely to be described as "providing hardware audio acceleration".
History of sound cards for the IBM PC architecture

A sound card based on VIA Envy chip
Echo Digital Audio Corporation's Indigo IO, a PCMCIA 24-bit 96 kHz stereo in/out sound card.
Sound cards for computers based on the IBM PC were uncommon until 1988, leaving the internal PC speaker as the only way early PC software could produce sound and music. The speaker was limited to square wave production, leading to the common nickname of "beeper" and the resulting sound described as "beeps and boops". Several companies, most notably Access Software, developed techniques for digital sound reproduction over the PC speaker; the resulting audio, while functional, suffered from distorted output and low volume, and usually required all other processing to halt while sounds were played. Other home computer models of the 1980s included hardware support for digital sound playback or music synthesis (or both), leaving the IBM PC at a disadvantage when it came to multimedia applications such as music composition or gaming.
It is important to note that the initial design and marketing focuses of sound cards for the IBM PC platform were not based on gaming, but rather on specific audio applications such as music composition (AdLib Personal Music System, Creative Music System, IBM Music Feature Card) or on speech synthesis (Digispeech DS201, Covox Speech Thing, Street Electronics Echo). It took the involvement of Sierra and other game companies in 1988 to switch the focus toward gaming.
Hardware manufacturers
One of the first manufacturers of sound cards for the IBM PC was AdLib, who produced a card based on the Yamaha YM3812 sound chip, aka the OPL2. The AdLib had two modes: A 9-voice mode where each voice could be fully programmed, and a lesser-used "percussion" mode that used 3 regular voices to produce 5 independent percussion-only voices for a total of 11. (The percussion mode was considered inflexible by most developers, so it was used mostly by AdLib's own composition software.)
Creative Labs also marketed a sound card at the same time called the Creative Music System. Although the C/MS had twelve voices to AdLib's nine, and was a stereo card while the AdLib was mono, the basic technology behind it was based on the Philips SAA 1099 which was essentially a square-wave generator. Sounding not unlike twelve simultaneous PC speakers, it never caught on the way the AdLib did, even after Creative marketed it a year later through Radio Shack as the Game Blaster. The Game Blaster retailed for under $100 and included the hit game title Silpheed.
Probably the most significant historical change in the history of sound cards came when Creative Labs produced the Sound Blaster card. The Sound Blaster cloned the AdLib, and also added a sound coprocessor to record and play back digital audio (presumably an Intel microcontroller, which Creative incorrectly called a "DSP" to suggest it was a digital signal processor), a game port for adding a joystick, and the ability to interface to MIDI equipment (using the game port and a special cable). With more features at nearly the same price point, and compatibility with existing AdLib titles, most first-time buyers chose the Sound Blaster. The Sound Blaster eventually outsold the AdLib and set the stage for dominating the market.
The Sound Blaster line of cards, in tandem with the first cheap CD-ROM drives and evolving video technology, ushered in a new era of multimedia computer applications that could play back CD audio, add recorded dialogue to computer games, or even reproduce motion video (albeit at much lower resolutions and quality). The widespread adoption of Sound Blaster support in multimedia and entertainment titles meant that future sound cards such as Media Vision's Pro Audio Spectrum and the Gravis Ultrasound needed to address Sound Blaster compatibility if they were to compete against it.
Industry adoption
When game company Sierra On-Line opted to support add-on music hardware (instead of built-in hardware such as the PC speaker and built-in sound capabilities of the IBM PCjr and Tandy 1000), the concept of what sound and music could be on the IBM PC changed dramatically. Two of the companies Sierra partnered with were Roland and Adlib, opting to produce in-game music for King's Quest 4 that supported the Roland MT-32 and Adlib Music Synthesizer. The MT-32 had superior output quality, due in part to its method of sound synthesis as well as built-in reverb. Being the most sophisticated synthesizer they supported, Sierra chose to use most of the MT-32's custom features and unconventional instrument patches to produce background sound effects (birds chirping, horses clopping, etc.) before the Sound Blaster brought playing real audio clips to the PC entertainment world. Many game companies would write for the MT-32, but support the Adlib as an alternative due to the latter's higher market base. The adoption of the MT-32 led the way for the creation of the MPU-401/Roland Sound Canvas and General MIDI standards as the most common means of playing in-game music until the mid-1990s.
Feature evolution
Most ISA bus soundcards could not record and play digitized sound simultaneously, mostly due to inferior card DSPs. Later PCI bus cards fixed these limitations and are mostly full-duplex.
For years, soundcards had only one or two channels of digital sound (most notably the Sound Blaster series and their compatibles) with the notable exception of the Gravis Ultrasound family, which had hardware support for up to 32 independent channels of digital audio. Early games and MOD-players needing more channels than the card could support had to resort to mixing multiple channels in software. Today, most good quality sound cards have hardware support for at least 16 channels of digital audio, but others, like those that utilize cheap audio codecs, still rely partially or completely on software to mix channels, through either device drivers or the operating system itself to perform a software downmix of multiple audio channels.
Sound devices other than expansion cards
Integrated sound on the PC
In 1984, the IBM PCjr debuted with a rudimentary 3-voice sound synthesis chip, the SN76489, capable of generating three square-wave tones with variable amplitude, and a pseudo white noise channel that could generate primitive percussion sounds. The Tandy 1000, initially being a clone of the PCjr, duplicated this functionality, with the Tandy TL/SL/RL line adding digital sound recording/playback capabilities.
In the late 1990s, many computer manufacturers began to replace plug-in soundcards with a "codec" (actually a combined audio AD/DA-converter) integrated into the motherboard. Many of these used Intel's AC97 specification. Others used cheap ACR slots.
As of 2005, these "codecs" usually lack the hardware for direct music synthesis or even multi-channel sound, with special drivers and software making up for these lacks, at the expense of CPU speed (for example, MIDI reproduction takes away 10-15% CPU time on an Athlon XP 1600+ CPU).
Nevertheless, some manufacturers offered (and offer, as of 2006) motherboards with integrated "real" (non-codec) soundcards usually in the form of a custom chipset providing e.g. full ISA or PCI Soundblaster compatibility, thus saving an expansion slot while providing the user with a (relatively) high quality soundcard.
Integrated sound on other platforms
Various computers which do not use the IBM PC architecture, such as Apple's Macintosh, and workstations from manufacturers like Sun, have had their own motherboard-integrated sound devices. In some cases these provide very advanced capabilities (for the time of manufacture); in most cases they are minimal systems. Some of these platforms have also had sound cards designed for their bus architectures, which of course cannot be used in a standard PC.
USB sound cards
While not literally sound cards (since they don't plug into slots inside of a computer, and usually are not card-shaped (rectangular)), there are devices called USB sound cards. These attach to a computer via USB cables. The USB specification defines a standard interface, the USB audio device class, allowing a single driver to work with the various USB sound devices on the market.
Other outboard sound devices
USB Sound Cards are far from the first external devices allowing a computer to record or synthesize sound. Virtually any method that was once common for getting an electrical signal in or out of a computer has probably been used to attempt to produce sound.
Driver architecture
To use a sound card, the operating system typically requires a specific device driver. Some operating systems include the drivers for some or all cards available, in other cases the drivers are supplied with the card itself, or are available for download.
DOS programs for the IBM PC often had to use universal middleware driver libraries (such as the HMI Sound Operating System, the Miles Sound System etc.) which had drivers for most common sound cards, since DOS itself had no real concept of a sound card. Some card manufacturers provided (sometimes inefficient) middleware TSR-based drivers for their products, and some programs simply had drivers incorporated into the program itself for the sound cards that were supported.
Microsoft Windows uses proprietary drivers generally written by the sound card manufacturers. Many makers supply the drivers to Microsoft for inclusion on Windows distributions. Sometimes drivers are also supplied by the individual vendors for download and installation. Bug fixes and other improvements are likely to be available faster via downloading, since Windows CDs cannot be updated as frequently as a web or FTP site. Vista will use UAA.
A number of versions of UNIX make use of the portable Open Sound System. Drivers are seldom produced by the card manufacturer.
Most Linux-based distributions make use of the Advanced Linux Sound Architecture, but have taken measures to remain compatible with the Open Sound System.




 Windows XP Troubleshooting


Permissions while Copying and Moving Files and Folders
In Microsoft Windows 2000, in Microsoft Windows Server 2003, and in Microsoft Windows XP, you have the option of using either the FAT32 file system or the NTFS file system. When you use NTFS, you can grant permissions to your folders and files.

Details: http://support.microsoft.com/kb/310316/en-us?spid=1173&sid=73
Install or Modify a Local Printer
This article describes the user rights that a user must have to install or to modify a local printer on a Microsoft Windows XP-based or Microsoft Windows 2000-based computer.

Details: http://support.microsoft.com/kb/297780/en-us?spid=1173&sid=73
Audit User Access of Files, Folders and Printers
As an administrator of a Windows XP Professional-based computer, you can configure your computer to audit user access to files, folders and printers. This facility is unavailable on Windows XP Home Edition.

Details: http://support.microsoft.com/kb/310399/en-us?spid=1173&sid=73
  
Configure a Connection to the Internet
Describes how to use the Network Connections tool to configure Internet connections in Windows XP Professional.

Details: http://support.microsoft.com/kb/305549/en-us?spid=1173&sid=73
Enable Administrator to Log On Automatically in Recovery Console
Related articles: 307654 - How To Install and Use the Recovery Console for Windows XP; 308402 - "The password is not valid" error message appears when you log on to Recovery Console in Windows XP.

Details: http://support.microsoft.com/kb/312149/en-us?spid=1173&sid=73
What is a Computer Display?

Computer display
A cable connects the monitor to a video adapter (video card) that is installed in an expansion slot on the computer’s motherboard. This system converts signals into text and pictures and displays them on a TV-like screen (the monitor).
The computer sends a signal to the video adapter, telling it what character, image, or graphic to display. The video adapter converts that signal to a set of instructions that tell the display device (monitor) how to draw the image on the screen.
It is important that the monitor have a TCO Certification.
 Cathode ray tube
The CRT, or cathode ray tube, is the picture tube of your monitor. Although it is a large vacuum tube, it is shaped more like a bottle. The tube tapers near the back, where there is a negatively charged cathode, or electron gun. The electron gun shoots electrons at the back of the positively charged screen, which is coated with phosphor. This excites the phosphors, causing them to glow as individual dots called pixels (picture elements). The image you see on the monitor's screen is made up of thousands of these tiny dots (pixels). If you have ever seen a child's LiteBrite toy, then you have a good idea of the concept. The distance between the pixels has a lot to do with the quality of the image. If the distance between pixels on a monitor screen is too great, the picture will appear fuzzy or grainy. The closer together the pixels are, the sharper the image on screen. The distance between pixels on a computer monitor screen is called its dot pitch and is measured in millimeters. Most modern monitors have a dot pitch of 0.28 mm or less.
Note: From an environmental point of view, the monitor is the most difficult computer peripheral to dispose of because of the lead it contains.
There are two electromagnets (yokes) around the collar of the tube, which bend the beam of electrons. The beam scans (is bent) across the monitor from left to right and top to bottom to create, or draw, the image line by line. The number of times in one second that the electron gun redraws the entire image is called the refresh rate and is measured in hertz (Hz). If the scanning beam hits each line of pixels in succession on each pass, the monitor is known as a non-interlaced monitor. The electron beam of an interlaced monitor scans the odd-numbered lines on one pass and the even lines on the second pass. Interlaced monitors are typically harder to look at and have been associated with eyestrain and nausea.
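As a rough worked example of how resolution and refresh rate combine, the sketch below estimates how many lines and pixels a non-interlaced CRT must draw per second. It deliberately ignores horizontal and vertical blanking intervals, so real-world figures are somewhat higher; the resolution and refresh rate used are illustrative.

    # Approximate scan rates for a non-interlaced CRT, ignoring blanking.
    # Numbers are illustrative, not a specification of any particular monitor.
    def crt_rates(h_pixels, v_lines, refresh_hz):
        line_rate = v_lines * refresh_hz               # horizontal scans per second
        pixel_rate = h_pixels * v_lines * refresh_hz   # pixels drawn per second
        return line_rate, pixel_rate

    lines_per_s, pixels_per_s = crt_rates(1024, 768, 85)
    print(lines_per_s)          # 65280 lines/s, i.e. roughly a 65 kHz horizontal scan
    print(pixels_per_s / 1e6)   # roughly 66.8 million pixels per second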
Imaging technologies

19" inch (48 cm) CRT computer monitorAs with television, several different hardware technologies exist for displaying computer-generated output:
Liquid crystal display (LCD). LCD-based monitors can accept television and computer signals (SVGA, DVI, PAL, SECAM, NTSC). As of this writing (June 2006), LCD displays are the most popular display device for new computers in North America.
Cathode ray tube (CRT)
Vector displays, as used on the Vectrex, in many scientific and radar applications, and in several early arcade machines (notably Asteroids). These are always implemented with CRT displays because they require a deflection system, though they can be emulated on any raster-based display.
Television receivers were used by most early personal and home computers, connecting composite video to the television set using a modulator. Image quality was reduced by the additional steps of composite video → modulator → TV tuner → composite video, though it reduced costs of adoption because one did not have to buy a specialized monitor.
Plasma display
Surface-conduction electron-emitter display (SED)
Video projector - implemented using LCD, CRT, or other technologies. Recent consumer-level video projectors are almost exclusively LCD based.
Organic light-emitting diode (OLED) display
During the era of early home computers, television sets were almost exclusively CRT-based.
Performance measurements
The relevant performance measurements of a monitor are:
Luminance
Size
Dot pitch. In general, the lower the dot pitch (e.g. 0.24 mm), the sharper the picture.
V-sync rate
Response time
Refresh rate
Display resolutions
A modern CRT display has considerable flexibility: it can usually handle a range of resolutions from 320 by 200 up to 2560 by 2040 pixels.
Issues and problems
Screen burn-in has been an issue for a long time with CRT computer monitors and televisions. Commonly, people use screensavers to prevent their computer monitors from getting screen burn-in. Burn-in occurs when an image is displayed on the screen for a long period without changing and becomes permanently embedded in the screen's phosphor coating. This phenomenon can often be seen on older ATMs. To prevent screen burn-in on computer monitors, it is recommended that you use a screensaver that changes its image often.
The other issue with computer monitors is that some LCD monitors may get dead pixels over time. This generally applies to older LCD monitors from the 1990s.
Both issues have become less of a problem over time as display technology has improved.
With the exception of DLP, most display technologies (especially LCD) have an inherent misregistration of the color planes; that is, the centres of the red, green, and blue dots do not line up perfectly. Subpixel rendering depends on this misalignment; technologies making use of it include the Apple II from 1976 [1], and more recently Microsoft (ClearType, 1998) and XFree86 (X Rendering Extension).
Display interfaces
Computer Terminals
Early CRT-based VDUs (Visual Display Units) such as the DEC VT05 without graphics capabilities gained the label glass teletypes, because of the functional similarity to their electromechanical predecessors.
Composite monitors
Early home computers such as the Apple II and the Commodore 64 used composite monitors. However, they are now used with video game consoles.
Digital monitors
Early digital monitors are sometimes known as TTLs because the voltages on the red, green, and blue inputs are compatible with TTL logic chips. Later digital monitors support the LVDS or TMDS protocols.
TTL monitors

IBM PC with green monochrome display.
Monitors used with the MDA, Hercules, CGA, and EGA graphics adapters used in early IBM Personal Computers and clones were controlled via TTL logic. Such monitors can usually be identified by a male DB-9 connector used on the video cable. The primary disadvantage of TTL monitors was the extremely limited number of colors available due to the low number of digital bits used for video signaling.
TTL monochrome monitors only made use of five of the nine pins. One pin was used as a ground, and two pins were used for horizontal/vertical synchronization. The electron gun was controlled by two separate digital signals: a video bit and an intensity bit to control the brightness of the drawn pixels. Only four unique shades were possible: black, dim, medium or bright.
CGA monitors used four digital signals to control the three electron guns used in color CRTs, in a signalling method known as RGBI, or Red Green and Blue, plus Intensity. Each of the three RGB colors can be switched on or off independently. The intensity bit increases the brightness of all guns that are switched on, or if no colors are switched on the intensity bit will switch on all guns at a very low brightness to produce a dark grey. A CGA monitor is only capable of rendering 16 unique colors. The CGA monitor was not exclusively used by PC based hardware. The Commodore 128 could also utilize CGA monitors. Many CGA monitors were capable of displaying composite video via a separate jack.
EGA monitors used six digital signals to control the three electron guns in a signalling method known as RrGgBb. Unlike CGA, each gun is allocated its own intensity bit. This allowed each of the three primary colors to have four different states (off, soft, medium, and bright) resulting in 64 possible colors.
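The palette sizes quoted above follow directly from the number of digital signal lines. The small sketch below simply enumerates the combinations: four on/off bits for CGA's RGBI scheme and a 2-bit level per gun for EGA's RrGgBb scheme. The exact voltages and the mapping of bit patterns to visible colors are not modelled here.

    # Count the colors each digital signalling scheme can express.
    # Counting only; the actual color mapping is not modelled (illustrative).
    from itertools import product

    rgbi = set(product([0, 1], repeat=4))      # R, G, B and Intensity bits (CGA)
    rrggbb = set(product(range(4), repeat=3))  # 4 levels per gun: off/soft/medium/bright (EGA)

    print(len(rgbi))     # 16 possible colors
    print(len(rrggbb))   # 64 possible colors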
Although not supported in the original IBM specification, many vendors of clone graphics adapters implemented backwards monitor compatibility and auto-detection. For example, EGA cards produced by Paradise could operate as an MDA or CGA adapter if a monochrome or CGA monitor was used in place of an EGA monitor. Many CGA cards were also capable of operating as an MDA or Hercules card if a monochrome monitor was used.
Modern technology
Analog RGB monitors
Most modern computer displays can show thousands or millions of different colors in the RGB color space by varying red, green, and blue signals in continuously variable intensities.
Digital and analog combination
Many monitors accept only analog input signals, but some more recent models (mostly LCD screens) support digital input signals. It is a common misconception that all computer monitors are digital. For several years, televisions, composite monitors, and computer displays have been significantly different. However, as TVs have become more versatile, the distinction has blurred.
Configuration and usage
Multi-head
Some users use more than one monitor. The displays can operate in multiple modes. One of the most common spreads the entire desktop over all of the monitors, which thus act as one big desktop. The X Window System refers to this as Xinerama.
A monitor may also clone another monitor.
Dualhead - Using two monitors
Triplehead - using three monitors
Display assembly - multi-head configurations actively managed as a single unit
Virtual displays
The X Window System provides configuration mechanisms for using a single hardware monitor for rendering multiple virtual displays, as controlled (for example) with the Unix DISPLAY environment variable or with the -display command-line option.
Major manufacturers
Apple Computer
BenQ
Dell, Inc.
Eizo
Iiyama Corporation
LaCie
LG Electronics
NEC Display Solutions
Philips
Samsung
Sony
ViewSonic
 
 
Mouse
A mouse is a pointing device whose 2D motion typically translates into the motion of a pointer on a display. In addition, it usually features buttons and/or other devices, such as "wheels", which allow the user to perform various system-dependent operations. Extra buttons or features can add more control or dimensional input.
The name "mouse", coined at the Stanford Research Institute, derives from the resemblance of early models (which had a cord attached to the rear part of the device, suggesting the idea of a tail) to the common small rodent of the same name.[1]
Because the computer mouse has long dominated the world of pointing devices in computing, people often refer to any generic computer pointing-device as a mouse.
 Mice
 Early mice

The first computer mouse, held by inventor Douglas Engelbart, showing the wheels that make contact with the working surface.
Douglas Engelbart of Stanford Research Institute invented the mouse in 1963 after extensive usability testing. Engelbart's team called it a "bug"; it was one of several experimental pointing devices developed for Engelbart's oN-Line System (NLS). The other devices were designed to exploit other body movements - for example, head-mounted devices attached to the chin or nose - but ultimately the mouse won out because of its simplicity and convenience.
The first mouse, a bulky device (pictured) used two gear-wheels perpendicular to each other: the rotation of each wheel translated into motion along one axis. Engelbart received patent US3541541 on November 17, 1970 for an "X-Y Position Indicator for a Display System". At the time, Engelbart envisaged that users would hold the mouse continuously in one hand and type on a five-key chord keyset with the other.
 Mechanical mice
Early mouse patents. From left to right: Opposing track wheels by Engelbart, Nov. '70, 3541541. Ball and wheel by Rider, Sept. '74, 3835464. Ball and two rollers with spring by Opocensky, Oct. '76, 3987685.
The optical sensor from a Microsoft Wireless IntelliMouse Explorer (v. 1.0A).
Bill English invented the so-called ball mouse in the early 1970s while working for Xerox PARC. The ball-mouse replaced the external wheels with a single ball that could rotate in any direction. Perpendicular wheels housed inside the mouse's body detected in their turn the motion of the ball. This variant of the mouse resembled an inverted trackball and was the predominant form used with personal computers throughout the 1980s and 1990s. The Xerox PARC group also settled on the modern technique of using both hands to type on a full-size keyboard and grabbing the mouse as needed.
Modern computer mice took form at the École polytechnique fédérale de Lausanne (EPFL) under the inspiration of Professor Jean-Daniel Nicoud and at the hands of engineer and watchmaker André Guignard. A spin-off of EPFL, Logitech, launched the first popular breed of mice.
The major movement-translation techniques employed in computer mice involve optical, mechanical and inertial sensors.
Honeywell produced another short-lived type of mechanical mouse. Instead of a ball, it had two plastic "feet" on the bottom which sensed movement. ([1])
 Optical mice
An optical mouse uses a light-emitting diode and photodiodes to detect movement relative to the underlying surface, rather than moving some of its parts — as in a mechanical mouse.
Early optical mice, circa 1980, came in two different varieties:
Some, such as those invented by Steve Kirsch of Mouse Systems Corporation, used an infrared LED and a four-quadrant infrared sensor to detect grid lines printed on a special metallic surface with infrared absorbing ink. Predictive algorithms in the CPU of the mouse calculated the speed and direction over the grid.
Others, invented by Richard F. Lyon and sold by Xerox, used a 16-pixel visible-light image sensor with integrated motion detection on the same chip ([2]) and tracked the motion of light dots in a dark field of a printed paper or similar mouse pad ([3]).
These two mouse types had very different behaviors, as the Kirsch mouse used an x-y coordinate system embedded in the pad, and would not work correctly when rotated, while the Lyon mouse used the x-y coordinate system of the mouse body, as mechanical mice do.
As computing power grew cheaper, it became possible to embed more powerful special-purpose image-processing chips in the mouse itself. This advance enabled the mouse to detect relative motion on a wide variety of surfaces, translating the movement of the mouse into the movement of the pointer and eliminating the need for a special mouse-pad. This advance paved the way for widespread adoption of optical mice.
Modern surface-independent optical mice work by using an optoelectronic sensor to take successive pictures of the surface on which the mouse operates. Most of these mice use LEDs to illuminate the surface that is being tracked; LED optical mice are often mislabeled as "laser mice". Changes between one frame and the next are processed by the image processing part of the chip and translated into movement on the two axes using an optical flow estimation algorithm. For example, the Agilent Technologies ADNS-2610 optical mouse sensor processes 1512 frames per second: each frame is a rectangular array of 18×18 pixels, and each pixel can sense 64 different levels of gray.
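The following is a much-simplified sketch of the frame-comparison idea described above: the current sensor image is compared against the previous one at several trial shifts, and the shift with the smallest difference is taken as the estimated motion. Real sensors implement a hardware optical-flow algorithm at far higher frame rates; the frame size, search range and data here are illustrative.

    # Block-matching sketch (illustrative): find the shift that best aligns
    # the new frame with the previous one; that shift estimates the motion.
    def estimate_motion(prev, curr, max_shift=2):
        h, w = len(prev), len(prev[0])
        best, best_err = (0, 0), float("inf")
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                err, count = 0, 0
                for y in range(h):
                    for x in range(w):
                        sy, sx = y + dy, x + dx
                        if 0 <= sy < h and 0 <= sx < w:
                            err += abs(curr[y][x] - prev[sy][sx])
                            count += 1
                if count and err / count < best_err:
                    best_err, best = err / count, (dx, dy)
        return best

    prev_frame = [[ 1,  2,  3,  4],
                  [ 5,  6,  7,  8],
                  [ 9, 10, 11, 12],
                  [13, 14, 15, 16]]
    curr_frame = [[ 2,  3,  4, 0],              # the same surface seen one
                  [ 6,  7,  8, 0],              # pixel further to the left;
                  [10, 11, 12, 0],              # the right-hand column is new
                  [14, 15, 16, 0]]
    print(estimate_motion(prev_frame, curr_frame))   # prints (1, 0)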
Optomechanical mice detect movement of the ball optically, giving the precision of optical tracking without its surface-compatibility problems, whereas fully optical mice detect movement relative to the surface by examining the light reflected off it.
 Laser mice

Two wireless computer mice with scroll wheels.
As early as 1998, Sun Microsystems provided a laser mouse with their Sun SPARCstation servers and workstations.
In 2004 Logitech, along with Agilent Technologies, introduced the laser mouse with its MX 1000 model. This mouse uses a small infrared laser instead of an LED, which according to the companies can increase the resolution of the image taken by the mouse, leading to around 20× more sensitivity to the surface features used for navigation compared to conventional optical mice, via interference effects.
Gamers have complained that the MX 1000 does not respond immediately to movement after it is picked up, moved, and then put down on the mouse pad. Newer revisions of the mouse do not suffer from this problem, which results from a power-saving feature (almost all optical mice, laser or LED based, also implement this power-saving feature, except those intended for use in gaming, where a millisecond of delay becomes significant). Engineers designed the laser mouse — as a wireless mouse — to save as much power as possible. In order to do this, the mouse blinks the laser when in standby mode (8 seconds after the last motion). This function also increases the laser life.
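The power-saving behaviour just described can be pictured with a short sketch, assuming a hypothetical firmware rule; the 8-second figure comes from the text above, while the standby duty cycle is an invented value used purely for illustration.

```python
STANDBY_AFTER_S = 8.0   # from the text: standby begins 8 seconds after the last motion

def laser_duty_cycle(seconds_since_motion):
    """Hypothetical firmware rule: run the laser continuously while tracking,
    then blink it at a low (assumed) duty cycle in standby to save power and
    extend the laser's life."""
    if seconds_since_motion < STANDBY_AFTER_S:
        return 1.0   # active tracking: laser lit continuously
    return 0.05      # standby: laser mostly off, pulsed briefly to check for motion
```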
 Optical versus mechanical mice

Operating a mechanical mouse.
1: Moving the mouse turns the ball.
2: X and Y rollers grip the ball and transfer movement.
3: Optical encoding disks include light holes.
4: Infrared LEDs shine through the disks.
5: Sensors gather light pulses to convert to X and Y velocities.
The Logitech iFeel optical mouse uses a red LED to project light onto the tracking surface.
Supporters of optical mice claim that optical tracking works better than mechanical mice, that it requires no maintenance, and that optical mice last longer because they have no moving parts. Optical mice do not normally require any maintenance other than removing debris that might collect under the light emitter, although cleaning a dirty mechanical mouse is fairly straightforward too.
Supporters of mechanical mice point out that optical mice generally cannot track on glossy and transparent surfaces, including many commercial mouse-pads, causing them to periodically "spin" uncontrollably during operation. Mice with less image-processing power also have problems tracking extremely fast movement, though high-end mice can track at 1 m/s (40 inches per second) and faster.
As of 2006, mechanical mice have lower average power demands than their optical counterparts. This typically has no practical impact for users of cabled mice (except possibly those used with battery-powered computers, such as notebook models), but has an impact on battery-powered wireless models. A typical mechanical model requires 25 mA at +5 V (= 0.125 W), or less, whereas an optical model draws 100 mA at +5 V (= 0.5 W) (for a 4∶1 ratio).
Since optical mice render movement based on an image which the LED illuminates, use with multi-colored mousepads may result in unreliable performance. However, optical models will outperform mechanical mice on uneven, slick, squishy, sticky or loose surfaces, and generally in mobile situations lacking mouse pads. The advent of affordable high-speed, low-resolution cameras and the integrated logic in optical mice provides an ideal laboratory for experimentation on next-generation input-devices. Experimenters can obtain low-cost components simply by taking apart a working mouse and changing the optics or by writing new software.
 Inertial mice
Inertial mice detect movement through a gyroscope for every axis supported. Usually cordless, they often have a switch to deactivate the movement circuitry between uses, allowing the user freedom of movement without affecting the pointer position.
 Buttons
In contrast to the motion-sensing mechanism, the mouse's buttons have changed little over the years, varying mostly in shape, number, and placement. Engelbart's very first mouse had a single button; Xerox PARC soon designed a three-button model, but reduced the count to two for Xerox products. Apple reduced it back to one button with the Macintosh in 1984, while Unix workstations from Sun and others used three buttons. Commercial mice usually have between one and three buttons, although in the late 1990s some mice had five or more.
The two-button mouse has become the most commonly available design. As of 2006 (and roughly since the mid-1990s), users most commonly employ the second button to invoke a contextual menu in the computer's software user interface, which contains options specifically tailored to the interface element over which the mouse pointer currently sits. By default, the primary mouse button is located on the left hand side of the mouse, for the benefit of right-handed users.
On systems with three-button mice, pressing the center button (a middle click) often maps conveniently to a commonly used action or a macro. Many two-button mice can emulate a three-button mouse by treating a simultaneous click of the left and right buttons as a middle click. The middle click frequently serves as a spare button for functions that are not easily assigned elsewhere.
 Additional buttons
Manufacturers have built mice with five or more buttons. Depending on the user's preferences, the extra buttons may allow forward and backward web navigation, scrolling through a browser's history, or other functions. As with similar features in keyboards, however, these functions may not be supported by all software. The additional buttons are generally more useful in computer games, where quick and easy access to a wide variety of functions (for example, weapon-switching in first-person shooters) can be very beneficial. Because mouse buttons can be mapped to virtually any function, keystroke, application or switch, they can make working with such a mouse more efficient and easier to use.
In the matter of the number of buttons, Douglas Engelbart favored the view "as many as possible". The prototype that popularised the idea of three buttons as standard had that number only because "we couldn't find anywhere to fit any more switches".
 Wheels
The scroll wheel, a notably different form of mouse button, consists of a small wheel that the user can rotate to provide immediate one-dimensional input. Usually this input translates into "scrolling" up or down within the active window or GUI element, which is especially helpful in navigating a long document. The scroll wheel can often be pressed as well, making it in effect a third (center) button. Under many Windows applications, pressing the wheel activates autoscrolling, and in conjunction with the control key (Ctrl) it may zoom in and out (applications which support this feature include Adobe Reader, Microsoft Word, Internet Explorer, Opera and Mozilla Firefox). Scroll wheels may be referred to by different names by various manufacturers for branding purposes; Genius, for example, usually brands its scroll wheel-equipped products "Netscroll".
Genius introduced the scroll wheel commercially in 1995, marketing it as the Mouse Systems ProAgio and Genius EasyScroll. Microsoft released the Microsoft IntelliMouse in 1996, and it became a commercial success in 1997 when the Microsoft Office application suite and the Internet Explorer browser started supporting its wheel-scrolling feature. Since then the scroll wheel has become a norm in some circles.
Some newer mouse models have two wheels, separately assigned to horizontal and vertical scrolling. Designs exist which make use of a "rocker" button instead of a wheel — a pivoting button that a user can press at the top or bottom, simulating "up" and "down" respectively.
A more recent form of mouse wheel, the tilt-wheel, features in some of the higher-end Logitech and Microsoft mice. Tilt wheels are essentially conventional mouse wheels that have been modified with a pair of sensors articulated to the tilting mechanism. These sensors are mapped, by default, to horizontal scrolling.
A third variety of built-in scrolling device, the scroll ball, essentially consists of a trackball embedded in the upper surface of the mouse. The user can scroll in all possible directions in very much the same way as with the actual mouse, and in some mice, can use it as a trackball. Mice featuring a scroll ball include Apple's Mighty Mouse and the IOGEAR 4D Web Cruiser Optical Scroll Ball Mouse.
 3D mice
In the late 1990s Kantek introduced the 3D RingMouse. This wireless mouse was worn on a ring around a finger, which enabled the thumb to access three buttons. The mouse was tracked in three dimensions by a base station. Despite a certain appeal, it was finally discontinued because it did not provide sufficient resolution.
 Connectivity and communication protocols
A very small USB mouse, designed for portability.
To transmit their input, typical cabled mice use a thin electrical cord terminating in a standard connector, such as RS-232C, PS/2, ADB or USB. Cordless mice instead transmit data via infrared radiation (see IrDA) or radio (including Bluetooth).
The electrical interface and the format of the data transmitted by commonly available mice have in the past varied between manufacturers.
 PS/2 interface and protocol
For more details on this topic, see PS/2 connector.
With the arrival of the IBM PS/2 personal-computer series in 1987, IBM introduced the eponymous PS/2 interface for mice and keyboards, which other manufacturers rapidly adopted. The most visible change was the use of a round 6-pin mini-DIN, in lieu of the former 5-pin connector. In default mode (called stream mode) a PS/2 mouse communicates motion, and the state of each button, by means of 3-byte packets.
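As a rough illustration of how a host might unpack such a packet, the following Python sketch follows the widely documented layout of the standard PS/2 stream-mode packet (button bits and sign flags in the first byte, movement counts in the next two); the function and variable names are this article's own, not part of any particular driver.

```python
def decode_ps2_packet(packet):
    """Decode a standard 3-byte PS/2 mouse packet (stream mode).

    The first byte carries button states plus sign and overflow flags; the
    second and third bytes are the X and Y movement counts, which combine
    with the sign bits to form 9-bit two's-complement deltas.
    """
    flags, x, y = packet
    buttons = {
        "left":   bool(flags & 0x01),
        "right":  bool(flags & 0x02),
        "middle": bool(flags & 0x04),
    }
    dx = x - 256 if flags & 0x10 else x   # bit 4: X sign bit
    dy = y - 256 if flags & 0x20 else y   # bit 5: Y sign bit
    return buttons, dx, dy

# Example: left button held, mouse moved 5 counts right and 3 counts up
print(decode_ps2_packet([0b00001001, 5, 3]))
```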
 IntelliMouse and others
A Microsoft IntelliMouse relies on an extension of the PS/2 protocol: the ImPS/2 or IMPS/2 protocol (the abbreviation combines the concepts of "IntelliMouse" and "PS/2"). It initially operates in standard PS/2 format, for backwards compatibility. After the host sends a special command sequence, it switches to an extended format in which a fourth byte carries information about wheel movements. The IntelliMouse Explorer works analogously, with the difference that its 4-byte packets also allow for two additional buttons (for a total of five).
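Continuing the sketch above, the extra byte of the extended packet can be unpacked as follows; the layout shown (a signed wheel delta, with the Explorer variant packing a 4-bit wheel delta plus two button bits) reflects the commonly documented behaviour, and the helper name is again illustrative rather than vendor code.

```python
def decode_extended_byte(b, explorer=False):
    """Unpack the fourth byte of an IntelliMouse-style PS/2 packet (illustrative).

    Plain ImPS/2 mode: the whole byte is a signed wheel delta.
    IntelliMouse Explorer mode: the low 4 bits are a signed wheel delta, and
    bits 4 and 5 report the two extra (fourth and fifth) buttons.
    """
    if not explorer:
        return {"wheel": b - 256 if b & 0x80 else b}
    wheel = (b & 0x0F) - 16 if b & 0x08 else (b & 0x0F)
    return {"wheel": wheel,
            "button4": bool(b & 0x10),
            "button5": bool(b & 0x20)}
```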
The Typhoon mouse uses 6-byte packets which can appear as a sequence of two standard 3-byte packets, so that an ordinary PS/2 driver can handle them.
Mouse-vendors also use other extended formats, often without providing public documentation.
For 3D or 6DOF input, vendors have made many extensions to both the hardware and the software. In the late 1990s Logitech created ultrasound-based tracking that gave 3D input accurate to a few millimeters; it worked well as an input device but failed as a commercial product.
Other input devices, such as PhaseSpace's 3D optical tracking, have been used to create VR and AR input devices, and are being experimented with at Cambridge, Cardiff, UCSC and other universities as well as government labs. This type of input device tracks multiple LED sources to provide sub-millimeter tracking for controlling tools and robots in real or virtual space, as well as for computer-based training.
 Apple Desktop Bus

Apple ADB mouse.
In 1986 Apple first implemented the Apple Desktop Bus, allowing the daisy-chaining of up to 16 devices, including arbitrarily many mice. Featuring only a single data pin, the bus used a purely polled approach to computer/mouse communication and survived as the standard on mainstream models until 1998, when the iMac began the switch to USB. The PowerBook G4 retained the Apple Desktop Bus for communication with its built-in keyboard and trackpad until early 2005.
 Common button uses
Single-click – select
Right-select (right-click)
Double-click
Triple-click
Cut
Paste
 Tactile mice
In 2000, Logitech introduced the "tactile mouse", which contained a small actuator that made the mouse vibrate. Such a mouse can augment user-interfaces with haptic feedback, such as giving feedback when crossing a window boundary. But optical mice cannot use this feature, making widespread adoption unlikely.
Other unusual variants have included a mouse that a user holds freely in the hand, rather than on a flat surface, and that detects six dimensions of motion (the three spatial dimensions, plus rotation on three axes). Its vendor marketed it for business presentations in which the speaker stands or walks around. So far, these mice have not achieved widespread popularity.
 Mouse speed
The computer industry often measures mouse sensitivity in terms of DPI (dots per inch), the number of pixels the mouse cursor will move when the mouse is moved one inch. However, software tricks like changeable mouse sensitivity can be used to make a cursor move faster or slower than its DPI, and the use of cursor acceleration can make the cursor accelerate when the mouse moves at a constant speed. This makes "DPI" confusing, and Apple and several other vendors have suggested adopting a replacement metric: "CPI" (counts per inch).[citation needed]
A less common unit, the "Mickey", takes its name from Mickey Mouse. Not a traditional unit of measurement, it indicates merely the number of "dots" reported in a particular direction. Only when combined with the DPI of the mouse does the Mickey become an indication of actual distance moved. In the absence of acceleration, the Mickey corresponds to the number of pixels moved on the computer screen.
Additionally, operating systems traditionally apply acceleration, referred to as ballistics, to the motion reported by the mouse. For example, versions of Windows prior to Windows XP doubled reported values above a configurable threshold, and then optionally doubled them again above a second configurable threshold. These doublings applied separately in the X and Y directions, resulting in very nonlinear response. Windows XP and many OS versions for Apple Macintosh computers use a smoother ballistics calculation that compensates for screen-resolution and has better linearity.
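A hedged Python sketch of the two ideas above: converting raw counts ("mickeys") to physical distance using the sensor's DPI, and the pre-Windows-XP style threshold doubling. The DPI and threshold values here are arbitrary illustrations; actual drivers use configurable settings and, on newer systems, smoother curves.

```python
DPI = 400           # assumed sensor resolution: counts ("mickeys") per inch of travel
THRESHOLD_1 = 6     # assumed first doubling threshold, in counts per report
THRESHOLD_2 = 10    # assumed second doubling threshold

def counts_to_inches(counts):
    """Without acceleration, physical distance moved is simply counts / DPI."""
    return counts / DPI

def accelerate(counts):
    """Apply pre-Windows-XP style doubling to one axis of reported motion."""
    magnitude = abs(counts)
    if magnitude > THRESHOLD_1:
        counts *= 2
    if magnitude > THRESHOLD_2:
        counts *= 2
    return counts

# 12 counts (0.03 inches of travel at 400 DPI) exceeds both thresholds,
# so the cursor is moved 48 counts' worth on that axis.
print(counts_to_inches(12), accelerate(12))
```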
 "Mice" and "mouses"
The fourth (current as of 2006) edition of The American Heritage Dictionary of the English Language endorses both computer mice and computer mouses as correct plural forms for computer mouse. The form mice, however, appears most commonly, while some authors of technical documents may prefer the form mouse devices. The plural mouses treats mouse as a "headless noun", as discussed in the English plural article.
 Accessories
 Mousepad
The mousepad, the most popular mouse accessory, appears most commonly in conjunction with mechanical mice, because in order to roll smoothly, the ball requires more friction than common desk surfaces usually provide. Special "hard mousepads" for gamers also exist.
Optical and laser mice do not require a pad, and using one with such models remains mostly a matter of personal taste. Exceptions arise when the desk surface creates problems for optical or laser tracking, for instance when a grain pattern on the surface causes inaccurate tracking of the pointer. Other reasons include protecting the desk or table surface from scratches and wear, providing a more comfortable mousing surface to work on, and reducing the collection of debris under the mouse.
 Foot covers
Mouse foot covers (or foot pads) consist of low-friction or polished plastic, which makes the mouse glide with less resistance over a surface. Some higher-quality models have Teflon feet to decrease friction further.
 Cord managers
Accessories for managing the cord of a mouse come in different forms. They aim to help manage excess cord length, avoiding interference with normal operation.
 Wrist-rests
Cushioning pillows made from silicone gel, neoprene or other spongy material have also become popular accessories. The padding provides a more natural angle for the wrist, in order to reduce fatigue and avoid excessive strain. However, some people believe that wrist rests relieve strain only because they change the user's mousing posture, and that they do not necessarily correct anything.
 Mice in the marketplace
Around 1981, Xerox included mice with its Xerox Star, based on the mouse that had been used in the 1970s on the Alto computer at Xerox PARC. Sun Microsystems, Symbolics, Lisp Machines Inc., and Tektronix also shipped workstations with mice, starting in about 1981. Later, inspired by the Star, Apple Computer released the Apple Lisa, which also used a mouse. However, none of these products achieved large-scale success. Only with the release of the Apple Macintosh in 1984 did the mouse see widespread use.
The Macintosh design, successful and influential, led many other vendors to begin producing mice or including them with their other computer products. The widespread adoption of graphical user interfaces in the software of the 1980s and 1990s made mice all but indispensable for controlling computers. As of 2000, Dataquest estimated annual world-wide sales of mice worth US$1.5 billion.[citation needed]
 Alternative devices
Trackball – the user rolls a ball mounted in a fixed base.
Touchpad – detects finger movement about a sensitive surface — the norm for modern laptop computers. At least one physical button normally comes with the touchpad, but users can also (configurably) generate a click by tapping on the pad. Advanced features include detection of finger pressure, and scrolling by moving one's finger along an edge.
Pointing stick – a pressure-sensitive nub used like a joystick on laptops, usually found between the g, h and b keys on the keyboard.
Consumer touchscreen devices exist that resemble monitor shields. Framed around the monitor, they use software-calibration to match screen and cursor positions.
Mini-mouse – a small egg-sized mouse for use with laptop computers — usually small enough for use on a free area of the laptop body itself.
Camera mouse – a camera tracks head movement and moves the on-screen cursor accordingly. Natural-pointer systems instead track a dot worn on the person's head to drive the cursor.
Palm mouse – held in the palm and operated with only 2 buttons; the movements across the screen correspond to a feather touch, and pressure increases the speed of movement.
Foot mouse – a mouse variant for those who do not wish to or cannot use the hands or the head; instead, it provides footclicks.
Tablet – pen-like in form, but used as a mouse. It is held like a normal pen and moved across a special pad. Clicking is usually controlled by the thumb, via a two-way button near the top of the pen.
Eyeball controlled – A mouse controlled by the user's eyeball/retina movements, allowing the cursor to be manipulated without touch.
Finger-mouse – An extremely small mouse controlled by two fingers only; it can be held in any position.
 Applications of mice in user interfaces
Usually, computer users utilize a mouse to control the motion of a cursor in two dimensions in a graphical user interface. Clicking or hovering can select files, programs or actions from a list of names, or (in graphical interfaces) through pictures called "icons" and other elements. For example, a text file might be represented by a picture of a paper notebook, and clicking while the pointer hovers over this icon might cause a text editing program to open the file in a window. (See also point-and-click.)
Users can also employ mice gesturally; that is, a stylized motion of the mouse cursor itself, called gesture, can be used as a form of command and mapped to a specific action. For example, in a drawing program, moving the mouse in a rapid "x" motion over a shape might delete the shape.
Gestural interfaces are rarer than plain pointing and clicking, and people often find them more difficult to use because they require finer motor control from the user. However, a few gestural conventions have become widespread, including the drag-and-drop gesture, in which:
The user presses the mouse button while the mouse cursor hovers over an interface object
The user moves the cursor to a different location while holding the button down
The user releases the mouse button
For example, a user might drag and drop a picture representing a file onto a picture of a trash-can, indicating that the file should be deleted.
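The press-move-release sequence above can be sketched as a small state machine; the class and event-handler names below are hypothetical and do not correspond to any particular GUI toolkit's API.

```python
class DragAndDrop:
    """Illustrative state machine for the drag-and-drop gesture described above."""

    def __init__(self):
        self.dragging = False
        self.source = None

    def on_press(self, obj):
        # Step 1: the button goes down while the cursor hovers over an object.
        self.dragging = True
        self.source = obj

    def on_move(self, position):
        # Step 2: the cursor moves to a different location with the button held.
        if self.dragging:
            print(f"dragging {self.source} toward {position}")

    def on_release(self, target):
        # Step 3: the button is released; drop the source onto whatever lies underneath.
        if self.dragging and target is not None:
            print(f"dropped {self.source} onto {target}")
        self.dragging = False
        self.source = None

# Example: dragging a file icon onto a trash-can icon, as in the paragraph above.
dnd = DragAndDrop()
dnd.on_press("file icon")
dnd.on_move((120, 80))
dnd.on_release("trash can")
```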
Other uses of the mouse's input occur commonly in special application-domains. In interactive three-dimensional graphics, the mouse's motion often translates directly into changes in the virtual camera's orientation. For example, in the first-person shooter genre of games (see below), players usually employ the mouse to control the direction in which the virtual player's "head" faces: moving the mouse up will cause the player to look up, revealing the view above the player's head.
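A minimal sketch of that translation, assuming a simple yaw/pitch camera; the sensitivity constant, the pitch clamp and the optional invert_y flag (see the "Invert mouse setting" section below) are illustrative values, not taken from any particular game engine.

```python
SENSITIVITY = 0.1    # assumed degrees of camera rotation per mouse count
PITCH_LIMIT = 89.0   # clamp so the player cannot look past straight up or down

def mouse_look(yaw, pitch, dx, dy, invert_y=False):
    """Turn relative mouse motion into camera angles, first-person-shooter style.

    dx turns the view left/right (yaw); dy tilts it up/down (pitch).
    With invert_y set, pushing the mouse forward looks down instead of up.
    """
    yaw = (yaw + dx * SENSITIVITY) % 360.0
    delta_pitch = dy * SENSITIVITY
    if invert_y:
        delta_pitch = -delta_pitch
    pitch = max(-PITCH_LIMIT, min(PITCH_LIMIT, pitch + delta_pitch))
    return yaw, pitch

# Moving the mouse up (positive dy) raises the view; with invert_y it lowers it.
print(mouse_look(0.0, 0.0, dx=30, dy=20))
print(mouse_look(0.0, 0.0, dx=30, dy=20, invert_y=True))
```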
When mice have more than one button, software may assign different functions to each button. Often, the primary (leftmost in a right-handed configuration) button on the mouse will select items, and the secondary (rightmost in a right-handed) button will bring up a menu of alternative actions applicable to that item. For example, on platforms with more than one button, the Mozilla web browser will follow a link in response to a primary button click, will bring up a contextual menu of alternative actions for that link in response to a secondary-button click, and will often open the link in a new tab or window in response to a click with the tertiary (middle) mouse button.
 One, two or three buttons?

Three-button mouse.
The issue of whether a mouse "should" have exactly one button or more than one has attracted a surprising amount of controversy. From the first Macintosh until late 2005, Apple shipped computers with a single-button mouse, whereas most other platforms used a multi-button mouse. Apple and its advocates promoted single-button mice as more efficient, and portrayed multi-button mice as confusing for novice users. The Macintosh user interface is designed so that all functions are available with a single-button mouse, and Apple's Human Interface Guidelines still specify that all functions must remain available with one. However, X Window System applications, which Mac OS X can also run, were designed with two- or even three-button mice in mind, making even simple operations like "cut and paste" awkward. Mac OS X natively supports multi-button mice, so some users of older Macs choose to use third-party mice on their machines. On August 2, 2005, Apple introduced the Mighty Mouse, a multi-button mouse with four independently programmable buttons and a "scroll ball" that allows the user to scroll in any direction. Since it uses touch-sensitive technology rather than visible divisions into separate buttons, users can treat it as a one-, two-, three- or four-button mouse, as desired.
Advocates of multiple-button mice argue that support for a single-button mouse often leads to clumsy workarounds in interfaces where a given object may have more than one appropriate action. Several common workarounds exist, and even some widely-used Macintosh applications that otherwise conform to the Apple Human Interface Guidelines occasionally require the use of one of them.
One such workaround involves the press-and-hold technique. In a press-and-hold, the user presses and holds the single button. After a certain period, software perceives the button press not as a single click but as a separate action. This has two drawbacks: first, a slow user may press-and-hold inadvertently. Second, the user must wait while the software detects that the click is actually a press-and-hold, otherwise their press might be interpreted as a single click. Furthermore, the remedies for these two drawbacks conflict with each other: the longer the lag time, the more the user must wait; and the shorter the lag time, the more likely it is that some user will accidentally press-and-hold when meaning to click.
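A tiny sketch of the timing trade-off just described; the lag-time constant is an assumed value, and shortening or lengthening it trades accidental holds against longer waits, exactly as the paragraph notes.

```python
HOLD_THRESHOLD_S = 0.5   # assumed lag time: shorter risks accidental press-and-holds,
                         # longer makes every deliberate hold slower to register

def classify_press(duration_seconds):
    """Decide whether a single-button press was an ordinary click or a press-and-hold."""
    return "press-and-hold" if duration_seconds >= HOLD_THRESHOLD_S else "click"

print(classify_press(0.12))   # a quick click
print(classify_press(0.80))   # held long enough to count as a press-and-hold
```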
Alternatively, the user can hold down a key on the keyboard while pressing the mouse button (Macintosh computers use the Ctrl key). This has the disadvantage of requiring both of the user's hands to be engaged. It also requires the user to perform two actions on completely separate devices in concert, that is, pressing a key on the keyboard while pressing a button on the mouse, which can be a daunting task for a disabled user. Studies have found all of the above workarounds less usable than additional mouse buttons for experienced users.
Most machines running Unix or a Unix-like operating system run the X Window System, which almost always encourages a three-button mouse. X numbers the buttons by convention, which allows user instructions to apply to mice or pointing devices that do not use conventional button placement. For example, a left-handed user may reverse the buttons, usually with a software setting; with non-conventional button placement, user directions that say "left mouse button" or "right mouse button" become confusing. The ground-breaking Xerox PARC Alto and Dorado computers from the mid-1970s used three-button mice, and each button was assigned a color: red for the left (or primary) button, yellow for the middle (secondary), and blue for the right (meta or tertiary). This naming convention lives on in some Smalltalk environments, such as Squeak, and can be less confusing than the right, middle and left designations.
 Mice in gaming
Mice often function as an interface for PC-based computer games and sometimes for video game consoles. They are often used in combination with the keyboard. In arguments over which is the best gaming platform, the mouse is often cited as a possible advantage for the PC, depending on the gamer's personal preferences.
 First-person shooters

Logitech G5 Laser Mouse, designed for gaming.
A combination of mouse and keyboard provides a popular way to play first-person shooter (FPS) games. Players use the X-axis of the mouse for looking (or turning) left and right, leaving the Y-axis for looking up and down. The left mouse button is usually for primary fire. Many gamers prefer this over a gamepad or joystick because it allows them to turn quickly and with greater accuracy. If the game supports multiple fire modes, the right button often provides secondary fire from the selected gun. In games supporting grenades, it can serve to throw grenades. In Call of Duty 2, it allows users to look down the barrel of the gun (for better aiming).
Gamers can use a scroll wheel for changing weapons, or for controlling scope-zoom magnification. On most FPS games, programming may also assign these functions to thumb-buttons. A keyboard usually controls movement (for example, WASD, for moving forward, left, backward and right, respectively) and other functions like changing posture. Since the mouse serves for aiming, a mouse that tracks movement accurately and with less lag will give a player an advantage over players with less accurate or slower mice.
An early player technique, circle-strafing, saw a player continuously strafing while aiming and shooting at an opponent by walking in a circle around the opponent, with the opponent at the center of the circle. Players could achieve this by holding down a key for strafing while continuously aiming the mouse towards the opponent.
Games using mice for input are so popular that many manufacturers, such as Belkin, Logitech, Cyber Snipa[4] and Razer USA Ltd, make premium peripherals such as mice and keyboards specifically for gaming. Such mice frequently feature adjustable weights, high-resolution optical or laser components, additional buttons, ergonomic shapes, and other features such as adjustable DPI.
 Invert mouse setting
Many games, such as first- or third-person shooters, have a setting named "invert mouse" or similar (not to be confused with "button inversion", sometimes performed by left-handed users) which allows the user to look downward by moving the mouse forward and upward by moving the mouse backward (the opposite of non-inverted movement). This control system resembles that of aircraft control sticks, where pulling back causes pitch up and pushing forward causes pitch down; computer joysticks also typically emulate this control-configuration.
After id Software's Doom, the game that popularized FPS games but which did not support vertical aiming with a mouse (the y-axis served for forward/backward movement), competitor 3D Realms' Duke Nukem 3D became one of the first games that supported using the mouse to aim up and down. It and other games using the Build engine had an option to invert the Y-axis. The "invert" feature actually made the mouse behave in a manner that users now regard as non-inverted. Soon after, id Software released Quake which introduced the invert feature as we now know it. Other games using the Quake engine have come on the market keeping to this standard, likely due to the overall popularity of Quake.
 Super Nintendo
In the early 1990s the Super Nintendo Entertainment System became the first commercial gaming console to feature a mouse in addition to its controllers. The game Mario Paint in particular used the mouse's capabilities.
 



