Wednesday, August 8, 2007

GROUP ACTIVITY


Pin Grid Array Processor

In a pin grid array (PGA) package, the integrated circuit (IC) is mounted on a ceramic slab of which one face is covered, or partially covered, in a square array of metal pins. The pins can then be inserted into the holes in a printed circuit board and soldered in place. They are almost always spaced 2.54 mm (a tenth of an inch) apart. For a given number of pins, this type of package occupies less space than older types such as the dual in-line package (DIP).


Example of Pin Grid Array Processor

Motorola 68020

The 68020 (usually just referred to as the '020, pronounced oh-two-oh or oh-twenty) had 32-bit internal and external data and address buses. A lower cost version, the 68EC020, only had a 24-bit address bus. The 68020 was produced at speeds ranging from 12 MHz to 33 MHz.

Improvements over 68010

The 68020 added many improvements to the 68010 including a 32-bit arithmetic logic unit (ALU), external data bus and address bus, and new instructions and addressing modes. The 68020 (and 68030) had a proper three-stage pipeline.

The alignment restriction on word and longword data access present in its predecessors was removed with the 68020.

Usage

The 68020 was used in the Apple Macintosh II and Macintosh LC personal computers, as well as Sun 3 workstations and the Hewlett Packard 8711 Series Network Analyzers. The Commodore Amiga 1200 computer and the Amiga CD32 games console used the cost-reduced 68EC020.

It is also the processor used on board TGV trains to decode signalling information which is sent to the trains through the rails, and is the CPU of the computers in the Eurofighter Typhoon.

For more information on the instructions and architecture see Motorola 68000.



Land Grid Array Processors

The land grid array (LGA) is a type of surface-mount packaging used for integrated circuits. It can be electrically connected to a PCB either by the use of a socket or by soldering directly to the board.


Examples of Land Grid Array

Socket F

Socket F is a CPU socket designed by AMD for its Opteron line of CPUs. The socket has 1,207 pins and was released on August 15, 2006.[1]

Socket F is primarily for use in AMD's server line and is considered to be in the same socket generation as Socket AM2, used for the Athlon 64 and Athlon 64 X2, as well as Socket S1, used for the Turion 64 and Turion 64 X2 microprocessors. Such socket generations are intended for DDR2 support.

Socket F has been rumoured to support Fully Buffered DIMMs (FB-DIMMs). Processors planned for Socket F will also likely support DDR3 and other technologies, such as XDR DRAM. But when such RAM is used on an FB-DIMM, no motherboard or CPU change is necessary to support the new RAM, as all FB-DIMMs use the same DRAM slots regardless of the RAM employed. This overcomes an old drawback of the Hammer architecture, whose integrated memory controller necessitated replacing the (potentially very expensive) CPU to support a new memory type. However, AMD has recently removed FB-DIMM from its roadmap.


Socket T

Socket T, also known as LGA775, is Intel's latest desktop CPU socket. LGA stands for land grid array. The word "socket" is now something of a misnomer: an LGA775 motherboard has no socket holes; instead, it has 775 protruding pins that touch contact points on the underside of the processor (CPU).[1]

The Prescott and Cedar Mill Pentium 4 cores, as well as the Smithfield and Presler Pentium D cores, currently use the LGA775 socket type. In July 2006, Intel released the desktop version of the Core 2 Duo (codenamed Conroe), which also uses this socket, as does the subsequent Core 2 Quad. Intel changed from Socket 478 to LGA775 because the new pin type offers better power distribution to the processor, allowing the front-side bus to be raised to 1333 MT/s. The "T" in Socket T was derived from the now-cancelled Tejas core, which was to replace the Prescott core.

As it is now the motherboard that has the pins rather than the CPU, the risk of pins being bent is transferred from the CPU to the motherboard. That risk is reduced because the pins are spring-loaded and locate onto a surface rather than into a hole. Also, the CPU is pressed into place by a "load plate" rather than directly by human fingers. The installing technician lifts the hinged load plate, inserts the processor, closes the load plate over the top of the processor, and pushes down a locking lever. The pressure of the locking lever on the load plate clamps the processor's 775 gold contact points firmly down onto the motherboard's 775 pins, ensuring a good connection. The load plate covers only the edges of the top surface of the CPU; the center is left free to make contact with the cooling mechanism placed on top of the CPU.




CASING SYSTEM



Introduction

Thermaltake has long been known for its ability to cool a computer system and for making quality enclosures, but it has recently been making a name for itself as a premier power supply manufacturer as well. With the growing needs of modern enthusiast-level rigs, power has become a concern, so the folks at Thermaltake have put together a monster of a PSU for your high-power needs.

Introducing the W0133RU Toughpower 1200-watt power supply. While big numbers are impressive, we will take a look at the features included with this behemoth and see whether it has not only the power we need in our monster machine but also the features to let us be king of the hill.

So sit back and relax for a bit as we delve into this big-boy and see if it can handle the stress and also show us that it is something more than "just another power supply".


COOLING SYSTEM

Computer cooling is the practice of removing heat, a potentially damaging byproduct of operation, from electronic computers. The many components of a computer's system unit produce large amounts of heat during operation, including, but not limited to, integrated circuits such as CPUs, chipsets, and graphics cards, along with hard drives. This heat must be dissipated in order to keep these components within their safe operating temperatures, and both manufacturing methods and additional parts are used to keep the heat at a safe level. This is done mainly using heat sinks, which increase the surface area that dissipates heat; fans, which speed up the exchange of air heated by the computer parts for cooler ambient air; and, in some cases, soft cooling, the throttling of computer parts in order to decrease heat generation.
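
To put numbers on the heat-sink idea, the sketch below uses the standard steady-state estimate: die temperature equals ambient temperature plus dissipated power times the thermal resistance of the cooling path. The wattage and resistance figures are illustrative assumptions, not values from the text.

/* Estimating CPU die temperature from thermal resistance.
   All figures here are illustrative assumptions, not measured specs. */
#include <stdio.h>

int main(void) {
    double ambient_c = 25.0;  /* room-air temperature, deg C */
    double power_w   = 65.0;  /* assumed heat output of the CPU, watts */
    double theta_cw  = 0.4;   /* assumed thermal resistance, deg C per watt */

    /* Each watt dissipated raises the die theta_cw degrees above ambient,
       so a lower thermal resistance (larger surface area, faster fan)
       means a cooler chip. */
    double die_c = ambient_c + power_w * theta_cw;
    printf("Estimated die temperature: %.1f deg C\n", die_c);
    return 0;
}

With these assumed numbers the estimate is 51 deg C; halving the thermal resistance with a better heatsink would bring it down to 38 deg C.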

Overheated parts generally exhibit a shorter maximum life-span and may give sporadic problems resulting in system freezes or crashes.

A stock AMD heatsink mounted onto a motherboard.


LATEST BUSES

In computing, a bus is the electrical pathway through which a computer's processor communicates with some of its parts and/or peripherals. Physically, a bus is a set of parallel tracks that can carry digital signals; it may take the form of copper tracks laid down on the computer's printed circuit boards (PCBs), or of an external cable or connection.

A computer typically has three internal buses laid down on its main circuit board: a data bus, which carries data between the components of the computer; an address bus, which selects the route to be followed by any particular data item travelling along the data bus; and a control bus, which decides whether data is written to or read from the data bus. An external expansion bus is used for linking the computer processor to peripheral devices, such as modems and printers.
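
As a rough illustration of how the three internal buses cooperate, here is a toy model in C. It is not from the original text, and every name in it (bus_cycle, BUS_READ, and so on) is invented for the example.

/* Toy model of one bus transaction: the address bus selects a location,
   the control bus says read or write, and the data bus carries the value. */
#include <stdio.h>
#include <stdint.h>

#define MEM_WORDS 16
static uint8_t memory[MEM_WORDS];   /* a tiny addressable device */

enum control { BUS_READ, BUS_WRITE };

uint8_t bus_cycle(uint8_t address, enum control ctrl, uint8_t data) {
    if (ctrl == BUS_WRITE) {                 /* control bus: write cycle */
        memory[address % MEM_WORDS] = data;  /* data bus -> memory */
        return data;
    }
    return memory[address % MEM_WORDS];      /* read cycle: memory -> data bus */
}

int main(void) {
    bus_cycle(0x03, BUS_WRITE, 42);                          /* store 42 at address 3 */
    printf("%u\n", (unsigned)bus_cycle(0x03, BUS_READ, 0));  /* prints 42 */
    return 0;
}

The width of the address bus also fixes how much memory can be selected: 16 address lines reach 2^16 = 64 KB, while 32 lines reach 2^32 = 4 GB.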



Monday, July 9, 2007

Intel Processor Specs vs AMD Processor Specs

Intel Processors
Technology Used

Intel® Core™ microarchitecture: Higher performance, greater energy efficiency, and more responsive multitasking for enhanced user experiences in all environments.

Intel® Quad-Core technology

Intel® Quad-Core processors deliver four complete execution cores within a single processor, delivering unprecedented performance and responsiveness in multithreaded and multitasking business and home use environments.

Additional transistors deliver advanced capabilities—from dual- and multi-cores and improved cache, to innovative technologies such as virtualization and security.
Availability: first delivered in 2005.

45nm Hi-k metal gate technology

With more than 400 million transistors for dual-core processors and more than 800 million for quad-core, the 45nm family introduces new microarchitecture features for greater performance and new levels of energy efficiency.
Availability: H2 2007.
Intel® next generation architecture—"Nehalem"

Nehalem is a truly dynamic and design-scalable microarchitecture that will deliver both performance on demand and optimal price/performance/energy efficiency for each platform.
Availability: 2008.

Intel® 64 architecture

Intel 64 architecture improves performance by allowing systems to address more than 4 GB of both virtual and physical memory.
Availability: first delivered in 2004.

New instruction set innovation—SSE4

Streaming SIMD Extensions 4 (SSE4) is Intel's largest ISA extension in terms of scope and impact since SSE2 and offers dozens of new innovative instructions.
Availability: H2 2007.
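
To give a concrete taste of what an SSE4 instruction does, the sketch below uses the SSE4.1 dot-product instruction (DPPS) through the _mm_dp_ps compiler intrinsic. This example is not taken from Intel's materials, and it assumes a compiler with SSE4.1 support (for example, gcc -msse4.1).

/* One SSE4.1 instruction, DPPS, computes a four-element dot product
   that would otherwise take four multiplies and three adds. */
#include <stdio.h>
#include <smmintrin.h>   /* SSE4.1 intrinsics */

int main(void) {
    __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);  /* lanes 0..3 hold 1,2,3,4 */
    __m128 b = _mm_set_ps(8.0f, 7.0f, 6.0f, 5.0f);  /* lanes 0..3 hold 5,6,7,8 */

    /* Mask 0xF1: multiply all four lane pairs, sum the products,
       and place the result in the lowest lane. */
    __m128 dot = _mm_dp_ps(a, b, 0xF1);

    float result;
    _mm_store_ss(&result, dot);
    printf("dot = %.1f\n", result);  /* 1*5 + 2*6 + 3*7 + 4*8 = 70 */
    return 0;
}
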
Architecture Used
Intel® Core™ Microarchitecture
Intel® Core™ microarchitecture is the foundation for new Intel® architecture-based desktop, mobile, and mainstream server multi-core processors. This state-of-the-art, multi-core optimized microarchitecture delivers a number of new and innovative features that have set new standards for energy-efficient performance.
Intel Core microarchitecture extends the energy-efficient philosophy first delivered in Intel's mobile microarchitecture found in the Intel® Pentium® M processor, and greatly enhances it with many new and leading-edge microarchitectural innovations as well as existing Intel NetBurst® microarchitecture features.
Intel Architecture Labs

Intel Architecture Labs, also known as IAL, was the personal computer system research and development arm of Intel Corporation during the 1990s. IAL was created by Intel Vice President Ron Whittier, together with Craig Kinnie and Steven McGeady, to develop the hardware and software innovations considered to be lacking from PC OEMs and Microsoft in the late 1980s and 1990s.

IAL pursued both hardware and software initiatives, but the latter became de-emphasized after the efforts collided with similar activities by Microsoft. For example, Native Signal Processing (NSP) was a software initiative to allow Intel-based PCs to run time-sensitive code independently of the operating system, allowing real-time audio and video processing on the microprocessors of the mid-1990s. Microsoft refused to support NSP in its operating systems and convinced PC makers that the NSP drivers would render their systems unsupported, and Intel pulled back from promoting the software, leaving NSP an orphan. IAL also tangled with Microsoft by supporting Netscape and its early browser, and by producing a fast native x86 port of the Java system. Most of these projects were later shelved, and after 1997 IAL tended not to risk competing with Microsoft. The details of IAL's conflicts with Microsoft over software were revealed in Steven McGeady's testimony in the Microsoft antitrust trial.
Not all of IAL's software efforts met bad ends because of Microsoft. IAL developed one of the first software digital video systems, Indeo™, a technology that was used in its ProShare videoconferencing product line but later suffered from neglect and was sold to another company in the late 1990s.

However, IAL's successes in the hardware world are legendary, and include PCI, USB, AGP, the Northbridge/Southbridge core logic architecture, and PCI Express, now the dominant expansion-bus architecture.
AMD Processor Specs
Technology Used
(Advanced Micro Devices) A major manufacturer of semiconductor devices including x86-compatible CPUs, embedded processors, flash memories, programmable logic devices and networking chips. Founded in 1969 by Jerry Sanders and seven friends, AMD's first product was a 4-bit shift register. During the 1970s, it entered the memory business, and after reverse engineering the popular 8080 CPU, the microprocessor market as well.

Throughout the 1980s, AMD was a second-source supplier of Intel x86 CPUs, but in 1991, it introduced the 386-compatible Am386, an AMD-architected chip. With its own chip designs, AMD began to compete directly with Intel. Two years later, the Am486 was introduced, followed throughout the 1990s by the K5, K6 and Athlon families. In 2000, AMD introduced its value line of Duron chips, which were superseded by Sempron in 2004. All AMD-designed chips have been noted for their cool-running, innovative architectures.

In 2003, AMD debuted the Opteron, the first 64-bit x86-compatible CPU on the market. Intended for servers and high-end workstations, the Opterons were followed by 64-bit Athlon models for the desktop. Microsoft announced it would support AMD's 64-bit extensions in Windows XP and Windows Server 2003. Over the years, numerous PC vendors, both small and large, have successfully used millions of AMD's CPU chips in their PCs.
Opteron

The first of AMD's 64-bit CPU chips, formerly code-named Sledgehammer (part of the Hammer line). Introduced in April 2003, the Opteron fully supports 32-bit applications but requires that programs be optimized and recompiled to take full advantage of the 64 bits. The 64-bit version of Windows XP also takes advantage of the increased CPU word size. Intended for servers and high-end workstations, the Opteron competes with Intel's Xeon and Itanium lines. AMD subsequently introduced 64-bit Athlon CPUs.
Athlon

A family of Pentium-compatible CPU chips from AMD. The first 32-bit models were introduced as Pentium III-class CPUs in 1999 with a 200 MHz system bus and CPU speeds up to 650 MHz. Over subsequent years, AMD added numerous models of 32-bit Athlons for desktop, server, and mobile use, long since exceeding the initial clock speeds.
AMD64
The AMD64 design, known as the Direct Connect Architecture (DCA), includes a set of 64-bit instructions as well as 64-bit registers that allow the CPU to address more than 4 GB of memory, the limit of 32-bit registers. It also added a DDR memory controller directly on the CPU chip for increased performance.
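
The 4 GB figure above is simple register arithmetic: a 32-bit register can distinguish 2^32 byte addresses, while a 64-bit register can distinguish 2^64. A quick sketch (illustrative only):

/* Address-space arithmetic behind the 32-bit 4 GB limit. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t space32 = 1ULL << 32;   /* bytes reachable with 32 address bits */
    printf("32-bit: %llu bytes = %llu GiB\n",
           (unsigned long long)space32,
           (unsigned long long)(space32 >> 30));

    /* 2^64 bytes is one past what a 64-bit counter can hold,
       so report it in GiB instead: 2^64 bytes = 2^34 GiB. */
    printf("64-bit: 2^64 bytes = %llu GiB\n",
           (unsigned long long)(1ULL << 34));
    return 0;
}
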
Architecture Used
AMD started as a producer of logic chips in 1969, then entered the RAM chip business in 1975. That same year, it introduced a reverse-engineered clone of the Intel 8080 microprocessor. During this period, AMD also designed and produced a series of bit-slice processor elements (Am2900, Am29116, Am293xx) which were used in various minicomputer designs.

During this time, AMD attempted to embrace the perceived shift towards RISC with its own AMD 29K processor, and it attempted to diversify into graphics and audio devices as well as EPROM memory. It had some success in the mid-1980s with the AMD7910 and AMD7911 "World Chip" FSK modem, one of the first multistandard devices, covering both Bell and CCITT tones at up to 1200 baud half duplex or 300/300 full duplex. While the AMD 29K survived as an embedded processor and AMD spinoff Spansion continues to make industry-leading flash memory, AMD was not as successful with its other endeavors. AMD decided to switch gears and concentrate solely on Intel-compatible microprocessors and flash memory. This put it in direct competition with Intel for x86-compatible processors and their flash memory secondary markets.
AMD x86 processors


AMD 80286 (1982)
In February 1982, AMD signed a contract with Intel, becoming a licensed second-source manufacturer of 8086 and 8088 processors. IBM wanted to use the Intel 8088 in its IBM PC, but IBM's policy at the time was to require at least two sources for its chips. AMD later produced the Am286 under the same arrangement, but Intel canceled the agreement in 1986 and refused to convey technical details of the i386 part. AMD challenged Intel's decision to cancel the agreement and won in arbitration, but Intel disputed this decision. A long legal dispute followed, ending in 1994 when the Supreme Court of California sided with AMD. Subsequent legal disputes centered on whether AMD had legal rights to use derivatives of Intel's microcode. In the face of uncertainty, AMD was forced to develop "clean room" versions of Intel code.

In 1991, AMD released the Am386, its clone of the Intel 386 processor. It took less than a year for the company to sell a million units. Later, the Am486 was used by a number of large OEMs, including Compaq, and proved popular. Another Am486-based product, the Am5x86, continued AMD's success as a low-price alternative. However, as product cycles shortened in the PC industry, the process of reverse engineering Intel's products became an ever less viable strategy for AMD.

Monday, July 2, 2007

Group Assignment

History of Intel



Around 1978-1979, Intel took the next step up by introducing the 8086 processor. This was one of the earlier 16-bit processors. IBM decided to use it.
Around 1984-1985 Intel introduced the 80286 which added more sophisticated memory management.
In 1987 the 80386 became available. This was a 32-bit machine with real memory management (paging).
In 1990, the 80486 arrived sporting an integrated floating point unit and higher performance.
The Pentium was introduced around 1992-93.

Intel was founded on July 18, 1968 with one main goal in mind: to make semiconductor memory more practical. Intel's first microprocessor, the 4004 microcomputer, was released at the end of 1971. The chip was smaller than a thumbnail, contained 2,300 transistors, and was capable of executing 60,000 operations in one second. Shortly after the release of the 4004, the 8008 microcomputer was released, capable of executing twice as many operations per second as the 4004. Intel's commitment to the microprocessor led to IBM's choice of Intel's 8088 chip for the CPU of its first PC. In 1982, Intel introduced the first 286 chip; it contained 134,000 transistors and provided around three times the performance of the other microprocessors of the time. In 1989 the 486 processor was released, containing 1.2 million transistors and the first built-in math coprocessor. The chip was approximately 50 times faster than Intel's original 4004 processor and equaled the performance of a powerful mainframe computer. In 1993 Intel introduced the Pentium processor, which was five times as fast as the 486; it contained 3.1 million transistors and was capable of 90 million instructions per second (MIPS). In 1995 Intel introduced its new technology, MMX, designed to enhance the computer's multimedia performance. Throughout the years that followed, Intel released several lines of processors including the Celeron, the P2, P3, and P4. Intel processors now reach speeds upwards of 2200 MHz, or 2.2 GHz.

Origin of the name

At its founding, Gordon Moore and Robert Noyce wanted to name their new company "Moore Noyce". This name, however, sounded remarkably similar to "more noise" — an ill-suited name for an electronics company, since noise is typically associated with bad interference. They then used the name NM Electronics for almost a year before deciding to call their company INTegrated ELectronics, or "Intel" for short. However, "Intel" was already trademarked by a hotel chain, so they had to buy the rights to the name at the outset.

Founders of Intel

Intel was founded in 1968 by Gordon E. Moore (a chemist and physicist) and Robert Noyce (a physicist and co-inventor of the integrated circuit) when they left Fairchild Semiconductor. A number of other Fairchild employees also went on to participate in other Silicon Valley companies. Intel's fourth employee was Andy Grove (a chemical engineer), who ran the company through much of the 1980s and the high-growth 1990s. Grove is now remembered as the company's key business and strategic leader. By the end of the 1990s, Intel was one of the largest and most successful businesses in the world, though fierce competition within the semiconductor industry has since diminished its position.

Intel headquarters in Santa Clara



Moore's Law


Moore's Law is the empirical observation made in 1965 that the number of transistors on an integrated circuit for minimum component cost doubles every 24 months.[1][2] It is attributed to Gordon E. Moore (born 1929),[3] a co-founder of Intel. Although it is sometimes quoted as every 18 months, Intel's official Moore's Law page, as well as an interview with Gordon Moore himself, state that it is every two years.


Earliest forms


The term Moore's Law was coined by Carver Mead around 1970.[4] Moore's original statement appeared in his publication "Cramming more components onto integrated circuits", Electronics Magazine, 19 April 1965.


Under the assumption that chip "complexity" is proportional to the number of transistors, regardless of what they do, the law has largely stood the test of time to date. However, one could argue that per-transistor complexity is lower in large RAM cache arrays than in execution units. From this perspective, the validity of one formulation of Moore's Law may be more questionable.
Gordon Moore's observation was not named a "law" by Moore himself, but by the Caltech professor, VLSI pioneer, and entrepreneur Carver Mead.[2] Moore, indicating that it cannot be sustained indefinitely, has since observed: "It can't continue forever. The nature of exponentials is that you push them out and eventually disaster happens."[5] Moore may have heard Douglas Engelbart, a co-inventor of today's mechanical computer mouse, discuss the projected downscaling of integrated circuit size in a 1960 lecture.[6] In 1975, Moore projected a doubling only every two years. He is adamant that he himself never said "every 18 months", but that is how it has been quoted. The SEMATECH roadmap follows a 24-month cycle.


Understanding Moore's Law
Moore's Law is not just about the density of transistors that can be achieved, but about the density of transistors at which the cost per transistor is lowest.[1] As more transistors are made on a chip, the cost to make each transistor falls, but the chance that the chip will not work due to a defect rises. If the rising cost of discarded non-working chips is balanced against the falling cost per transistor of larger chips, then, as Moore observed in 1965, there is a number of transistors, or complexity, at which "a minimum cost" is achieved. He further observed that, as transistors were made smaller through advances in photolithography, this number would increase at "a rate of roughly a factor of two per year".[1]
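
That minimum-cost trade-off can be sketched numerically. The toy model below is an assumption-laden illustration, not figures from Moore's paper: it uses a Poisson yield model and invented constants to balance the falling share of wafer cost per transistor against the rising rate of defective chips as die area grows.

/* Toy model of the "minimum cost" point: larger dice amortize the fixed
   per-chip cost over more transistors, but yield falls with area.
   Build with: cc mincost.c -lm */
#include <stdio.h>
#include <math.h>

int main(void) {
    double wafer_cost = 1000.0;   /* cost per wafer, arbitrary units (assumed) */
    double pkg_cost   = 1.0;      /* fixed test/package cost per chip (assumed) */
    double defect_mm2 = 0.005;    /* defects per mm^2 (assumed) */
    double dens_mm2   = 1.0e5;    /* transistors per mm^2 (assumed) */
    double wafer_mm2  = 70000.0;  /* usable wafer area, mm^2 (assumed) */

    for (double area = 20.0; area <= 320.0; area *= 2.0) {
        double yield = exp(-defect_mm2 * area);   /* Poisson yield model */
        double chip_cost = wafer_cost * area / wafer_mm2 + pkg_cost;
        double per_transistor = chip_cost / (yield * dens_mm2 * area);
        printf("area %5.0f mm^2: yield %5.1f%%, cost/transistor %.2e\n",
               area, yield * 100.0, per_transistor);
    }
    return 0;
}

Running it shows the cost per transistor falling and then rising again as die area grows; the minimum in between is Moore's "minimum cost" complexity.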


Formulations of Moore's Law

PC hard disk capacity (in GB). The plot is logarithmic, so the fit line corresponds to exponential growth.

The most popular formulation is of the doubling of the number of transistors on integrated circuits every 18 months. At the end of the 1970s, Moore's Law became known as the limit for the number of transistors on the most complex chips. However, it is also common to cite Moore's Law to refer to the rapidly continuing advance in computing power per unit cost, because the increase in transistor count is also a rough measure of computer processing power. On this basis, the power of computers per unit cost - or, more colloquially, "bangs per buck" - doubles every 24 months (or, equivalently, increases 32-fold in 10 years).
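
The gap between the 18-month and 24-month readings compounds quickly, as a small sketch makes clear (the starting count of one million transistors is an arbitrary assumption):

/* Projected transistor counts under 18- versus 24-month doubling.
   Build with: cc moore.c -lm */
#include <stdio.h>
#include <math.h>

int main(void) {
    double start = 1.0e6;   /* assumed transistor count at year 0 */
    for (int years = 0; years <= 10; years += 2) {
        double months = years * 12.0;
        printf("year %2d: 18-month rule %12.0f   24-month rule %12.0f\n",
               years,
               start * pow(2.0, months / 18.0),
               start * pow(2.0, months / 24.0));
    }
    return 0;
}

At year 10 the 24-month rule gives exactly 2^5 = 32 times the starting count, matching the 32-fold figure above, while the 18-month rule gives roughly 100 times.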


Amdahl's Law


Amdahl's law, named after computer architect Gene Amdahl, is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors.


Amdahl's Law is a law governing the speedup of using parallel processors on a problem, versus using only one serial processor. Before we examine Amdahl's Law, we should gain a better understanding of what is meant by speedup.
Speedup:
The speed of a program is the time it takes the program to execute. This could be measured in any increment of time. Speedup is defined as the time it takes a program to execute in serial (with one processor) divided by the time it takes to execute in parallel (with many processors). The formula for speedup is:


S = T(1) / T(j)



where T(j) is the time it takes to execute the program when using j processors. Efficiency is the speedup divided by the number of processors used. This is an important factor to consider: due to the cost of multiprocessor supercomputers, a company wants to get the most bang for its dollar.
To explore speedup further, we shall do a bit of analysis. If there are N workers working on a project, we may assume that they would be able to do the job in 1/N of the time it takes one worker working alone. Now, if we assume the strictly serial part of the program is performed in B*T(1) time, then the strictly parallel part is performed in ((1-B)*T(1)) / N time. With some substitution and manipulation, we get the formula for speedup:



S = N / (B*N + (1 - B))

where N is the number of processors and B is the fraction of the algorithm that is serial.



This formula is known as Amdahl's Law. The following is a quote from Gene Amdahl in 1967:
For over a decade prophets have voiced the contention that the organization of a single computer has reached its limits and that truly significant advances can be made only by interconnection of a multiplicity of computers in such a manner as to permit co-operative solution...The nature of this overhead (in parallelism) appears to be sequential so that it is unlikely to be amenable to parallel processing techniques. Overhead alone would then place an upper limit on throughput of five to seven times the sequential processing rate, even if the housekeeping were done in a separate processor...At any point in time it is difficult to foresee how the previous bottlenecks in a sequential computer will be effectively overcome.
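
To see what the formula implies in practice, here is a minimal sketch; the 10% serial fraction is an assumed example value, not a number from the text.

/* Amdahl's Law: speedup and efficiency versus processor count N
   for a fixed serial fraction B. */
#include <stdio.h>

int main(void) {
    double b = 0.10;   /* assumed: 10% of the algorithm is serial */
    int counts[] = {1, 2, 4, 8, 16, 64, 1024};
    int n_counts = sizeof counts / sizeof counts[0];

    for (int i = 0; i < n_counts; i++) {
        double n = counts[i];
        double speedup = n / (b * n + (1.0 - b));   /* the formula above */
        printf("N = %4d: speedup %6.2f, efficiency %5.1f%%\n",
               counts[i], speedup, 100.0 * speedup / n);
    }
    return 0;
}

No matter how many processors are added, the speedup here never exceeds 1/B = 10, which is exactly Amdahl's point: the serial fraction places an upper limit on throughput.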


Members:


Kenneth P. Magcalayo


Genesis V. Madriaga


Andie R. Pason


Haris D. Kambang


April John C. Olaveja