Everything you need to know about a videocard
This Hub explains what a videocard is and what its purpose is, and covers the technologies behind it, both hardware and software. This has been a hobby of mine for a long time and I’ve studied it for almost 10 years. I’ve seen older videocards evolve into bigger, better and much more optimized pieces of hardware. Be prepared to read a technical Hub that lists pretty much everything you need to know. Afterwards you can talk about videocards with confidence and judge their specifications.
What is a Videocard?
The little dots on your monitor that form the image you see are called pixels. The most common resolution settings contain more than a million pixels. The pc has to decide where all those pixels go in order to form an image on your screen. To accomplish this, a translator is needed to take binary data from the processor and turn it into the image you see on your screen. This translation is done by a graphics card (videocard), unless your motherboard has graphics capability built in. Built-in videocards won’t be covered in this Hub because I’ll be concentrating on separate videocards.
A videocard’s job is pretty complex, but its components and principles are easy to understand. Just think of a pc as a company with a separate art department. When people want some art, they send their request to this art department. The art department decides how the image is created and puts it on paper. The viewable picture is then the end result of somebody’s idea.
A videocard works on the same principle. The processor, working in conjunction with software, sends information to the videocard about what an image should look like. The videocard then decides how to create that image out of pixels. This information is sent through a cable to the monitor.
Producing an image out of binary data is very demanding. The videocard first creates a wire frame out of straight lines to form a 3-D image. Then it fills in the remaining pixels (rasterizing the image), and lighting, colour and textures are added. For very fast paced games this process is repeated about 60 times each second. This workload is too much for a pc to handle alone, which is why a graphics card performs the necessary calculations.
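To get a feel for those numbers, here is a small Python sketch of the raw pixel throughput involved; the resolution and frame rate are just illustrative assumptions, not figures from any specific game.

```python
# Rough estimate of how many pixels a videocard has to produce per second.
# 1600x1200 at 60 frames per second are illustrative example values.

def pixels_per_second(width, height, fps):
    """Pixels the card must rasterize each second at a given resolution and frame rate."""
    return width * height * fps

rate = pixels_per_second(1600, 1200, 60)
print(rate)  # 115200000 -> over a hundred million pixels every second
```

That is why a dedicated GPU, and not the CPU, does this work.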
Just like a motherboard, a videocard is a printed circuit board with a processor and RAM. It also has a BIOS (Basic Input Output System) chip that stores the videocard’s settings and diagnoses the memory, inputs and outputs at start-up. A videocard’s processor, called a GPU (Graphics Processing Unit), is very similar to a pc’s processor, but a GPU is designed specifically to perform the complex mathematical and geometric calculations necessary for graphics rendering. The fastest GPUs have more transistors than an average CPU. A GPU produces a lot of heat, so a heat sink with a fan is used to cool things down.
Apart from raw processing power, a GPU also applies special programming to analyze and use data. The 2 biggest manufacturers of GPUs on the market are ATI and Nvidia, and both have developed enhancements of their own for GPU performance, as well as techniques that help the GPU apply the needed colours, textures, patterns and shading.
On the right side you see a picture of a GPU chip (green GPU chips).
The videocard connects to the motherboard through a port. There are 3 main types of ports: PCI (Peripheral Component Interconnect), AGP (Accelerated Graphics Port) and PCIE (Peripheral Component Interconnect Express). The biggest reason to choose PCIE (PCI-Express) is that PCI is an old standard dating back to the early 90’s and doesn’t deliver the speed and performance today’s requirements demand. AGP is less old but is in the same position as PCI, so chipset manufacturers won’t build AGP motherboards anymore because of the much faster PCIE interface.
Max bandwidth per port:
PCI: 132 MB/s
AGP 8X: 2,100 MB/s
PCI Express 1x: 250 / 500 MB/s *
PCI Express 2x: 500 / 1,000 MB/s *
PCI Express 4x: 1,000 / 2,000 MB/s *
PCI Express 8x: 2,000 / 4,000 MB/s *
PCI Express 16x: 4,000 / 8,000 MB/s *
*Note: Because PCIE is based on serial technology, data is sent over the bus in 2 directions at the same time. In the table the first number is the bandwidth in one direction and the second number is the combined bandwidth in both directions. The speeds shown are for the PCIE 1.0 bus generation. For the newer version 2.0 you have to multiply all these bandwidths by 2 (PCI Express only). So a PCIE 2.0 16x slot has a maximum bandwidth of 8000 MB/s one way and 16000 MB/s both ways. To read what MB/s means, see my other Hub called: Everything you need to know about a Harddrive, particularly the “Megabyte and Megabit” section.
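The table and note above boil down to a simple multiplication. This Python sketch assumes 250 MB/s per lane per direction for PCIE 1.0, doubled for version 2.0, exactly as described in the note.

```python
# PCIE bandwidth as in the table above: 250 MB/s per lane per direction
# for the 1.0 generation, doubled for the 2.0 generation.

def pcie_bandwidth(lanes, version=1.0):
    per_lane_one_way = 250 * (2 if version == 2.0 else 1)  # MB/s
    one_way = lanes * per_lane_one_way
    both_ways = 2 * one_way  # serial bus: both directions at the same time
    return one_way, both_ways

print(pcie_bandwidth(16))       # (4000, 8000)  -> PCIE 1.0 16x, as in the table
print(pcie_bandwidth(16, 2.0))  # (8000, 16000) -> PCIE 2.0 16x, as in the note
```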
Now I am going to talk about every port individually.
PCI (Peripheral Component Interconnect)
A PCI slot is a standard for the local system bus that was first introduced by Intel. Because it’s not exclusive to any platform or processor, you find these slots in both Windows and Mac pc’s. This 32 bit bus has a maximum speed of 33 MHz, so it can pass through a maximum of about 133 MB of data each second.
These slots can hold a lot of different kinds of expansion cards to expand the functionality of a computer, such as sound cards, graphics cards and network cards.
These slots are very common and found on almost any motherboard, even though the bus speed is slower than that of PCIE slots. If you are not sure whether you have a free PCIE slot, you can always fall back on the PCI version of a card.
AGP (Accelerated Graphics Port)
AGP is a point-to-point bus which is used as a local bus and operates like a PCI interface, with the addition of 20 signals that are not present on a PCI bus.
AGP is 32 bits wide and, unlike PCI, runs at full bus speed. It uses 3.3 or 1.5 volts and runs at 66 MHz with a minimum bandwidth of 254.3 MB/s. AGP uses a signalling system that doubles the amount of data sent over the port at the same clock speed: information is sent both on the rising edge of the clock (the “0” to “1” transition) and on the falling edge (the “1” to “0” transition). PCI, by contrast, uses a single transition per cycle when it transfers data.
AGP 1x: 1.5/3.3 volts with 266 MB/s of bandwidth
AGP 2x: 1.5/3.3 volts with 533 MB/s of bandwidth
AGP 4x: 1.5 volts with 1066 MB/s of bandwidth
AGP 8x: 0.8 volts with 2.1 GB/s of bandwidth
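All of these figures come from one formula: clock rate x transfers per clock x bus width in bytes. Here is a quick Python sketch using the nominal 33/66 MHz clocks from the text; the official marketing figures use 66.66 MHz, which is why they come out slightly higher (1066 instead of 1056, for example).

```python
# One formula covers PCI and every AGP mode:
# bandwidth = clock (MHz) x transfers per clock x bus width in bytes.
# Nominal 33/66 MHz clocks are used; real buses run at 33.33/66.66 MHz.

def bus_bandwidth_mb(clock_mhz, transfers_per_clock, bus_width_bits):
    return clock_mhz * transfers_per_clock * (bus_width_bits // 8)

print(bus_bandwidth_mb(33, 1, 32))  # PCI:    132 MB/s
print(bus_bandwidth_mb(66, 2, 32))  # AGP 2x: ~528 MB/s (spec: 533)
print(bus_bandwidth_mb(66, 8, 32))  # AGP 8x: ~2112 MB/s, i.e. ~2.1 GB/s
```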
PCIE (Peripheral Component Interconnect Express)
Like I said, this port is the newest and fastest there is. Because of the increasing demand for bandwidth, Intel introduced PCIE. PCIE has many advantages for both manufacturers and users. It’s cheaper than AGP or PCI to produce and implement on a board, and it serves as a unifying I/O structure for servers, workstations, desktops and even mobile devices.
PCIE is a point-to-point (serial) connection, which means the bandwidth is not shared: each device has a direct communication path. Data can flow directly through a switch to the device, it enables hot plugging and hot swapping, and it even consumes less power than normal PCI.
But the most important feature is the greater bandwidth, which makes it much more scalable. This is achieved by adding more “lanes”, which will surely be extended in the future.
PCIE comes in a number of flavours: x1, x2, x4, x8 and x16. The number indicates the amount of lanes, so x1 has 1 lane, x2 has 2 of them, and so on. Because those lanes are bidirectional (both directions at the same time) they have 4 pins each. A lane has a 250 MB/s transfer rate in each direction, which makes 500 MB/s per lane.
PCIE x1 has 1 lane, 4 pins and 500 MB/s, and is used for add-in devices.
PCIE x2 has 2 lanes, 8 pins and 1000 MB/s (1 GB/s), and is used for add-in devices.
PCIE x4 has 4 lanes, 16 pins and 2000 MB/s (2 GB/s), and is also used for add-in devices.
PCIE x8 has 8 lanes, 32 pins and 4000 MB/s (4 GB/s), and is used for add-in devices but can also be used for mid-range graphics cards.
PCIE x16 has 16 lanes, 64 pins and 8000 MB/s (8 GB/s), and is used for graphics cards.
PCIE also has a number of revisions, but I am not going into detail on all of them. There are versions 1.0, 1.1, 2.0, 2.1 and 3.0.
The last is not yet released but will bring bigger bandwidth, better power consumption and less overhead; 8b/10b encoding is also no longer required. PCIE 3.0 has a bit rate of 8 GT/s (gigatransfers per second) compared to PCIE 2.0’s 5 GT/s.
On the right side you see a couple of videocards based on the ports just mentioned, showing the physical differences between those cards.
PCIE videocards always have a small leading section of contacts that is separated from the rest of the connector, plus a notch near the end. The notch fits the 1x portion of a 16x slot. This makes it much easier to tell a PCIE card apart from an AGP card. A PCIE videocard won’t fit into an AGP slot, nor does an AGP videocard fit inside a 16x PCIE slot.
Another physical difference is the distance between a card’s bracket and the beginning of the connector on PCIE, AGP and PCI cards. On PCIE videocards the distance between the metal bracket and the beginning of the connector is very small; both AGP and PCI have a much longer distance.
The noticeable physical difference for PCIE 1x and 4x add-in devices is also the distance to the bracket. Both connectors are a lot smaller than standard PCI, but because these types of cards are pretty rare, the chance of confusing them with anything else is small. PCIE 1x connectors do resemble the AMR slot from many years ago, but AMR slots are not used anymore and you can’t find any motherboard that has both an AMR slot and a PCIE slot. AMR (Audio/Modem Riser) was a special riser expansion slot used for soundcards or modems.
Pixel Shaders
Pixel shaders give each pixel its attributes and its colour. They can always output the same colour or apply lighting values like bump mapping, specular highlights, shadows, translucency or any other combination. They can also alter the depth of a pixel (Z-buffering) or output multiple colours when more render targets are active. A pixel shader by itself can’t produce very complex effects, because it operates on only 1 single pixel and has no knowledge of the complete geometry of a scene.
Pixel shaders have appeared in different versions, ranging from 1.0 up to 5.0. I won’t go into the details of every version.
Pixel shader 4.0 is only supported in DirectX 10 and above, and pixel shader 5.0 only in DirectX 11. More on DirectX later.
Vertex Shaders
Vertex shaders run once for every vertex provided to the GPU. They transform each vertex’s 3-D position in virtual space into the 2-D coordinates where it appears on the screen (including a Z-buffer depth value). Vertex shaders can manipulate position, texture coordinates and colour properties, but they cannot create new vertices. The output of the vertex shader goes to the next stage, such as a geometry shader or the rasterizer.
Vertex shaders also come in different versions; I’ll only mention the latest and won’t go into the details. Version 5.0 has more instruction slots, instruction predication, more temp registers, far more constant registers, static flow control, dynamic flow control, vertex texture fetch, many more texture fetches and texture samplers, etc. I won’t go into these details because otherwise this Hub would get too long in my opinion.
Geometry Shaders
Geometry shaders are a new type of shader introduced in DirectX 10. This shader generates new graphics primitives, like points, lines and triangles.
Geometry shader programs are executed after vertex shaders. They take a whole primitive as input, possibly with adjacency information: for instance, when operating on triangles, the geometry shader’s input is 3 vertices. The shader then emits zero or more primitives, which are rasterized and whose fragments are passed on to a pixel shader.
Point sprite generation, shadow volume extrusion, geometry tessellation and single-pass rendering to a cube map are all typical uses of geometry shaders. A typical real-world example of the benefit of geometry shaders is “automatic mesh complexity modification”: a curve’s control points are passed to the geometry shader as a series of line strips, and the shader can generate extra lines depending on the required complexity, so the curve gets a better approximation.
Unified Shaders
Unified shaders are newly designed shaders that use a consistent “instruction set” across all shader types. This is possible because all shader types have practically the same capabilities, like reading textures, arithmetic instructions and the use of data buffers. The instruction set still isn’t entirely identical between the shader types: texture reads can only be performed by the pixel shader, rendering primitives only by geometry shaders, etc. Shader model 1.x used different instruction sets for vertex and pixel shaders, while later shader models have much more flexible instruction sets. From shader models 2.0 and 3.0 onward the differences were reduced, coming close to the unified shader model.
Unified Shading Architecture
What this all means is that older videocards have fixed sets of pixel/vertex/geometry shaders, but with the “unified shading architecture” every shader unit can perform all the operations normally performed by a specific shader type. The latest games are more and more designed for these unified shaders, which makes games look nicer and run more efficiently. The current ATI Radeon HD6xxx series supports Shader Model 5.0, which makes full use of the unified shading architecture. Shader Model 5.0 is only supported in DirectX 11, and DirectX 11 is only supported on Windows Vista (SP2)/Windows 7. So to make use of this architecture you need such a Windows version, hardware that supports it and a game that supports it.
A just released game that has all of this is called Batman: Arkham City. On the right side is a YouTube video that shows this game in all its glory; personally it’s one of my favourites of all time! When you watch the video, be sure to select 1080P quality so you see it at a higher resolution. The term resolution will be discussed later on.
This game also supports Nvidia PhysX (CUDA), an API (Application Programming Interface) that provides high quality physics for the 3-D movement of objects in games and applications. Normally these objects are computed by the processor, but to assist the CPU, the GPU, which is much faster at this work, takes over the job. You need at least an Nvidia 8 series videocard and a game that supports it. ATI has something similar under the name ATI Stream, and it does the same as the Nvidia product. It takes on the computation of collisions, cloth, fluids and rigid bodies. So smoke, leaves, smashed bricks, fire particles, etc. are things in a game that are produced not by your CPU but by your GPU.
A large set of elements can be processed at the same time: a transformation can be applied to every pixel in a specific area of the screen, or to every vertex of a model. This is ideally suited for parallel processing, and modern videocards have multiple shader pipelines to support it. Computational throughput is vastly improved by this feature.
TMU (Texture Mapping Units)
Textures need filtering and addressing, and this is done by TMUs (Texture Mapping Units). They cooperate with the pixel shader and vertex shader units; applying texture operations to pixels is the TMU’s job.
ROP (Raster Operator Units)
Raster operator units write pixel data to the memory. Their speed is known as the “fill rate”: the rate at which the ROPs put data into memory. In the early days ROPs and the fill rate metric were a lot more important for videocards. This isn’t the performance bottleneck it once was, so ROPs are no longer used as a meaningful performance indicator these days.
Pipelines
“Pipeline” is a loosely used term for a videocard’s architecture that gives you a general idea of the power of a GPU. A pipeline isn’t formally a technical term, because there are different pipelines inside a GPU. Historically, a pipeline referred to a “pixel processor” attached to one single TMU (Texture Mapping Unit). An 8 year old videocard like the ATI Radeon 9700 has 8 pixel processors, each with 1 attached TMU, and was therefore considered an 8 pipeline videocard.
Memory bandwidth is also a very important feature. It depends on 2 things: the memory clock rate and the memory bus width. Memory bandwidth tells you how much data can theoretically be transferred to and from the memory per unit of time. To put it another way, it’s the speed at which the GPU can read and write data from and to the video memory. You get higher performance when a videocard can read its geometry data and textures, and write computed pixels, faster.
There is a calculation to determine your peak memory bandwidth, using the real memory clock rate and the memory bus width. Let’s say we have a videocard with an effective clock rate of 2000 MHz, which means a real memory clock rate of 1000 MHz (because of the double data rate), and a 320-bit memory bus. That makes this calculation:
1000 MHz x 2 (double data rate) x 40 bytes (the memory bus converted into bytes) = 80,000 MB/s = 80 GB/s. To find out how to convert bits into bytes, read my other Hub: Everything you need to know about a Harddrive.
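That worked example as a tiny Python helper, assuming the same double data rate memory as above:

```python
# Peak memory bandwidth: real clock x 2 (double data rate) x bus width in bytes.

def memory_bandwidth_gb(real_clock_mhz, bus_width_bits):
    mb_per_s = real_clock_mhz * 2 * (bus_width_bits // 8)
    return mb_per_s / 1000  # MB/s -> GB/s

print(memory_bandwidth_gb(1000, 320))  # 80.0 GB/s, as in the example above
print(memory_bandwidth_gb(1000, 64))   # 16.0 GB/s on a budget 64 bit bus
```

You can see immediately why a narrow 64 bit bus hurts performance so much.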
Sometimes the memory bus width isn’t listed on the box or in the specifications. You should pay attention to this detail because it is an important performance indicator. Manufacturers of budget videocards save money by using a 64 bit bus. The wider the bus the better the performance, so keep this in mind when you buy your new videocard. The least you should settle for is a 256 bit memory bus, although GDDR5 memory doesn’t need a very wide memory bus to give good performance. More on video RAM memory later on.
Display resolution is the number of pixels displayed in each dimension. The resolution is written as width x height, in pixels, so “1600x1200” means the width is 1600 pixels and the height is 1200. Display resolutions are also referred to as pixel dimensions because they indicate the pixel count in each dimension. The related “pixel density”, measured in “pixels per inch”, indicates how tightly those pixels are packed on a screen. On the right side you can click on the picture to view all the available resolutions with their aspect ratios.
Display aspect ratio is the proportional relation between the width and the height of a display. The aspect ratio is written as 2 numbers separated by a colon, x:y, and it doesn’t matter how big or tiny your display is: if you divide the width into x units of equal length, the height measures exactly y of those same units. The most common ratios are 4:3, 16:10 and 16:9.
The bigger the ratio, the wider your view will be in games and movies, and you won’t get those black bars around your movie or game.
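If you want to work out the x:y ratio of a resolution yourself, dividing both numbers by their greatest common divisor does the trick. A small Python sketch:

```python
# Reduce a resolution to its x:y aspect ratio using the greatest common divisor.
from math import gcd

def aspect_ratio(width, height):
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

print(aspect_ratio(1600, 1200))  # 4:3
print(aspect_ratio(1920, 1080))  # 16:9
print(aspect_ratio(1920, 1200))  # 8:5, which is commonly written as 16:10
```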
There are also other factors involved, like a TFT (Thin Film Transistor) screen forming the resolution. TFT is a technology derived from LCD (Liquid Crystal Display), which you might have heard of. It’s a very common technology these days: each pixel is a tiny element with its own transistor to control the display’s illumination. This means the current that triggers the pixel illumination is smaller, and it can therefore be switched on or off quicker than on an older passive LCD. This makes a TFT faster in response time (ms), so games and films won’t produce ghosting. Ghosting is when the pictures on your screen fade away or fade into each other, which makes watching movies or playing games less enjoyable.
Video memory is part of every videocard and ranges from a couple of MB to a max of 8 GB. Videocards with only a couple of MB are very old (10 years); the least you can get nowadays is 128 MB. The GPU has to access this memory constantly, so special high speed, multi-ported memory has always been used. There have been several kinds of memory, which I’ll go over quickly. VRAM (video RAM) was a widely used dual-ported memory; there were also WRAM and SGRAM, and around 2003 DDR became the basis for video memory. From there it went to GDDR3, GDDR4 and finally GDDR5. GDDR stands for Graphics Double Data Rate, and the memory clock rates range from 400 MHz to around 3.8 GHz.
The memory is used to store data such as the screen image, Z-buffers, textures, vertex buffers and compiled shader programs. On the right side you see a picture of a videocard with red arrows pointing to the memory chips.
RAMDAC (Random Access Memory Digital to Analog Converter)
This is a digital to analog converter which converts a digital signal into an analog signal so a display such as a CRT monitor can be used. It’s a sort of RAM chip that regulates the functioning of the video card. The number of bits used and the RAMDAC data transfer rate are the factors that determine the available display refresh rates. The higher the refresh rate of a screen the better, and personally I recommend not going under 85 Hz (hertz) at any resolution.
Current displays have a digital connection, so they don’t use the RAMDAC on a videocard. But if you use a VGA (Video Graphics Array) adapter with an older monitor, the RAMDAC is still needed. The conversion of the digital signal to analog before it’s displayed produces an unavoidable loss of quality. Digital to analog to digital conversion also reduces the quality on your screen overall, and you can see this with everything (games, movies, general use).
More on VGA adapter later on.
Connections and Cables
On the right side you see a picture with the back of a TFT screen and the rear of a pc.
- Power cord
- D-sub (Analog) VGA adapter
- DVI (Digital Visual Interface)
VGA (Video Graphics Array)
This is a connector with 15 pins in 3 rows and is very common on older videocards. It’s also known as the D-sub because of its very characteristic D-shaped metal shield. This connector carries an analog video signal, RGBHV (red, green, blue, horizontal sync, vertical sync), up to a maximum resolution of 2048 x 1536 @ 85 Hz. This connector doesn’t give the best quality results with a monitor, and it was superseded by DVI (Digital Visual Interface) in 1999.
DVI (Digital Visual Interface)
DVI is a digital connection which supports “QXGA” resolution, that is 2048 x 1536 @ 75 Hz. The main version is the single link DVI-D connector. The D in DVI-D means digital; there is also a DVI-I connector which additionally carries an analog signal. In the picture on the right you see a DVI-I connector above and a DVI-D below. As you can see, the upper connector has extra pins that carry the analog signal. You can only use a DVI > VGA adapter (second picture on the right) to convert to an analog VGA (D-sub) connection on a DVI-I connector (first picture, upper connector). Usually DVI-I connectors are used on videocards so you have the option of a DVI > VGA adapter in case you only have a VGA connection on your monitor. The latest videocards only have DVI and/or HDMI and/or DisplayPort connections on them.
There are also single link and dual link DVI cables. The single link connector has 19 pins: 2 rows of 3 x 3 groups of 9 pins plus a flat pin at the side. This cable has a maximum resolution of 1920 x 1200 at a video pixel rate of up to 165 MHz. DVI dual link cables have 25 pins, although the pin configuration is very similar to the single link cable. The big difference is that this cable has a maximum pixel rate of up to 330 MHz, twice the bandwidth of a single link cable, and therefore supports resolutions up to 2560 x 1600. On the right side you can see a picture of a single link and a dual link connector.
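A rough way to check whether a resolution fits a DVI link is to multiply width x height x refresh rate. This Python sketch deliberately ignores blanking intervals, so the real required pixel clock is some 20-30% higher than this estimate:

```python
# Rough pixel clock estimate for a resolution at a given refresh rate.
# Real displays also need blanking intervals, so actual required pixel
# clocks are higher than this lower bound.

SINGLE_LINK_MHZ = 165
DUAL_LINK_MHZ = 330

def approx_pixel_clock_mhz(width, height, refresh_hz):
    return width * height * refresh_hz / 1_000_000

clk = approx_pixel_clock_mhz(1920, 1200, 60)
print(round(clk, 1))                                          # 138.2 MHz
print(clk <= SINGLE_LINK_MHZ)                                 # fits a single link
print(approx_pixel_clock_mhz(2560, 1600, 60) <= DUAL_LINK_MHZ)  # needs dual link
```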
HDMI (High Definition Multimedia Interface)
HDMI is a compact audio and video interface introduced in late 2002 (version 1.0), followed by version 1.1 in 2004, 1.2 in 2005, 1.3 in 2006 and 1.4(a) in 2010. It is mainly used for DVD players, camcorders and video game consoles (PS3/Xbox 360) but can also be used with a pc. It sends uncompressed digital video and up to 8 channels of uncompressed digital audio through a single thin cable. The maximum pixel clock rate for HDMI 1.0 is 165 MHz, supporting a maximum resolution of 1920 x 1200 (WUXGA), which covers 1080P. HDMI 1.3 has a pixel clock rate of 340 MHz, allowing a resolution of 2560 x 1600 (WQXGA) across a single digital link.
HDMI also has single and dual links, similar to DVI. The single link connectors are indicated with A or C and the dual link connector with B; single link has a maximum video pixel rate of 340 MHz and dual link one of 680 MHz. HDMI type A is compatible with DVI-D using a passive adapter or conversion cable, but without the special features of each format: an HDMI to DVI converter won’t carry HDMI digital audio, and a DVI to HDMI converter doesn’t support analog video. The upper picture is a DVI to HDMI adapter and the lower picture is an HDMI to DVI adapter.
The HDMI connector types are:
Type A: 19 pins, supports all HDTV modes (up to 1080P), compatible with a DVI-D single link connector (via a converter). Specifications are defined in the HDMI 1.0 version.
Type B: 29 pins, double the video bandwidth of type A, intended for future very high resolutions up to 3840 x 2400 (WQUXGA). This type is compatible with dual link DVI-D connectors but isn’t used yet because no products with it have been released. Specifications are defined in the HDMI 1.0 version.
Type C: 19 pins, a mini connector for portable devices, smaller than a type A connector but with the same number of pins. Type C can be connected to a type A connector with a type A to type C cable. Specifications are defined in the HDMI 1.3 version.
Type D: 19 pins, a micro connector that looks a lot like the type A and C connectors but shrunk to micro size. Specifications are defined in the HDMI 1.4 version.
Type E: used in Automotive Connection Systems (cars, etc.). Specifications are defined in the HDMI 1.4 version.
Displayport
This is a new digital display interface standard for the digital interconnection of audio and video. There are 2 updated revisions, 1.1 and 1.2. Eventually the displayport will replace the DVI, VGA and maybe even HDMI connectors for linking computer displays and TV panels. It has largely the same functionality as HDMI but greater bandwidth and broader specifications.
It’s the first display interface that relies on packetized data transmission, similar to other communication protocols such as USB and PCI Express. DVI and HDMI have fixed differential pairs transmitting RGB pixels and the clock signal, but the displayport protocol is based on data packets with the clock embedded.
Displayport supports 1, 2 or 4 differential data pairs (lanes) in the Main Link, each with a raw bit rate of 1.62, 2.7 or 5.4 Gbit/s per lane, self-clocked at 162, 270 or 540 MHz. The data is 8b/10b encoded, so each 8 bits of data are encoded into a 10 bit symbol.
8b/10b encoding is a “line code” mapping 8 bit symbols to 10 bit symbols to achieve dc-balance and bounded disparity, so there are enough state changes for clock recovery. I know it sounds very complicated (which it is), but it means that the difference between the count of 1s and 0s in a string of at least 20 bits is never more than 2, and there are never more than 5 consecutive 1s or 0s in a row. The purpose of all this is to reduce the lower bandwidth limit of the channel needed to transfer the signal. If you forgot what 1s and 0s and bits are, take a look at my other Hub: Everything you need to know about a Harddrive.
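The practical effect of 8b/10b is that only 8 of every 10 transmitted bits carry actual data. A small Python sketch, applied to the displayport lane rates mentioned above:

```python
# 8b/10b overhead: 8 data bits are sent as a 10 bit symbol,
# so the effective data rate is 8/10 of the raw bit rate.

def effective_gbit(raw_gbit):
    return raw_gbit * 8 / 10

for raw in (1.62, 2.7, 5.4):  # displayport raw lane rates in Gbit/s
    print(raw, "->", round(effective_gbit(raw), 3))
# about 1.296, 2.16 and 4.32 Gbit/s of real data per lane
```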
The displayport versions are:
1.0/1.1: has a maximum data rate of 8.64 Gbit/s; version 1.1 adds support for fiber optic as an alternative link layer, so a much longer distance between the display and the source is possible without signal degradation.
1.2: doubles the maximum data rate (bandwidth) to 17.28 Gbit/s, which means higher resolutions, higher refresh rates and richer colour depth. Stereoscopic 3D (Nvidia’s 3D technology) is also supported in this version, as are multiple independent video streams (a daisy chain connection with more than 1 display), an increase of the AUX channel bandwidth from 1 Mbit/s to 720 Mbit/s, and support for more colour spaces.
If you can choose between all the above connections for your TFT display, choose displayport. It gives the best viewing quality and the highest resolution with the least difficult way of connecting to your monitor, and it’s only the size of a USB connector!
DirectX
This is a collection of APIs (Application Programming Interfaces) that handle tasks such as games, video and multimedia on the Microsoft platform. All these APIs start with “Direct”: Direct3D, DirectMusic, DirectDraw, DirectSound, DirectPlay, etc. DirectX is the collective short name for all these APIs, where the X stands for those particular API names. More on Direct3D later on.
Direct3D (the API in DirectX for 3D graphics) is the best known, used for rendering video games on Windows and the Microsoft Xbox (360), but also by other types of software like CAD/CAM engineering.
There are several versions of DirectX; the best known are DirectX 9, 10 and 11. Only Windows 7 supports all of DirectX 9/10/11, while Windows XP supports only DirectX 9. DirectX 8 was introduced with Windows XP and later updated to DirectX 9.0c. DirectX 10 was introduced with Windows Vista, and Windows 7 introduced DirectX 11. The new DirectX 11.1 will be introduced with the arrival of Windows 8. Keep in mind that with DirectX 9 the Direct3D component has version 9, DirectX 10 has Direct3D 10, and 11 has Direct3D 11. The following components are part of any version of DirectX and are responsible for separate jobs:
DirectDraw: 2D graphics drawing (raster graphics) used by some particular video rendering and applications.
Direct3D (D3D): 3D graphics rendering
DXGI: manage swap chains for Direct3D 10 and up for enumerating adapters
Direct2D: 2D graphics
DirectWrite: fonts rendering
DirectCompute: GPU computing
DirectInput: for input devices like mice, joysticks, gamepads, keyboards, etc.
DirectPlay: LAN (local area/wide area network) communication
DirectSound: playback of sounds like 3D sounds or other audio libraries like XAudio2 or XACT3.
DirectMusic: soundtracks are played by this authored in DirectMusic producer. Replaced by XAUDIO2 AND XACT3 since DirectX 8.
DirectX Media: includes DirectAnimation for 2D and 3D web animation
DirectShow: playback and streaming of multimedia. With the help of Direct3D it provides high level 3D graphics. It also has plug-ins for audio signal processing, plus DirectX Video Acceleration for enhanced video playback, which reduces the load on your processor.
DirectX Diagnostics: tool to diagnose and generate a report on all of your DirectX components (video, audio, input). This tool can be accessed by going to Start -> Run and typing "dxdiag".
DirectX Media Objects: streaming objects such as effects, decoders and encoders.
DirectSetup: detects the currently installed version of DirectX and manages the installation of DirectX components.
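To get a feel for what DXGI's swap chain management does, here is a minimal Python sketch of double buffering, the idea behind a swap chain. The class and method names are my own illustration, not DXGI's actual API: the application draws into a hidden back buffer while the front buffer is shown, then the two are swapped in one step when the frame is "presented".

```python
class SwapChain:
    """Toy model of double buffering: draw into the back buffer,
    then swap it with the front buffer on present()."""

    def __init__(self, width, height):
        self.front = [[0] * width for _ in range(height)]  # what the monitor shows
        self.back = [[0] * width for _ in range(height)]   # what the app draws into

    def draw_pixel(self, x, y, color):
        # Rendering never touches the visible (front) buffer.
        self.back[y][x] = color

    def present(self):
        # Swap buffers: the finished frame becomes visible in one step,
        # so the viewer never sees a half-drawn image.
        self.front, self.back = self.back, self.front

chain = SwapChain(4, 4)
chain.draw_pixel(1, 2, 255)
chain.present()
print(chain.front[2][1])  # -> 255: the drawn frame is now "on screen"
```

The real swap chain lives in video memory and is presented by the GPU, but the principle is the same: drawing and displaying happen in separate buffers that trade places.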
DirectX 10 is the successor to DirectX 9, available from Windows Vista onward, and it brought big changes. Some components, like DirectInput and DirectSound, were exchanged for XInput and XACT, and hardware acceleration for audio disappeared because the Vista audio stack renders sound in software on the CPU. It also introduced the new "Shader Model 4" I explained earlier, so features like pixel shaders and vertex shaders are directly connected to and part of DirectX. With the introduction of DirectX 10 a new graphics driver architecture was also introduced, named WDDM (Windows Display Driver Model). It replaced the Windows XP display driver architecture for better graphics performance and new functionality. It can render the desktop using the Desktop Window Manager, a compositing window manager built on top of Direct3D. DXGI is the new interface that maps graphics APIs like Direct3D 10 onto the graphics kernel, which in turn interfaces with the WDDM user mode driver.
WDDM requires at least a Direct3D 9 capable videocard and a display driver that implements the device driver interfaces for Direct3D 9Ex (extended). Legacy Direct3D applications run through this path, and the extended version of Direct3D 9 (9Ex) is part of Windows Vista and Windows 7.
DirectX 10.1 is an incremental update that came with Windows Vista Service Pack 1 and gives developers more control over image quality than version 10. It adds cube map arrays, separate blend modes per MRT (multiple render target), pixel shader output masking, running the pixel shader per sample, and access to multi-sample depth buffers. DirectX 10.1 requires your videocard to support at least Shader Model 4.1 and 32 bit floating point operations. You need hardware that supports 10.1 to use it, but such a videocard can still run DirectX 10 because 10.1 is backwards compatible.
DirectX 11 is the current latest version, with support for GPGPU computing (DirectCompute) and Direct3D 11 with tessellation. GPGPU stands for "general purpose computing on graphics processing units" and is available for Windows Vista/7: in other words, your GPU is used for computations that are traditionally handled by your processor. Tessellation is the process of covering a surface with a repeating pattern of geometric shapes without any gaps or overlaps; in practice the hardware subdivides geometry into many small triangles for more detail. Additionally, multithreading is improved so game developers can make better use of multi core processors (dual core, quad core, 6 core).
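As a rough illustration of what the tessellation stage does, the sketch below (plain Python, my own simplification of the hardware stage, not the Direct3D API) recursively splits a triangle at its edge midpoints. One input triangle becomes four smaller ones per level, and the pieces cover the original surface exactly, with no gaps or overlaps:

```python
def midpoint(a, b):
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

def tessellate(tri, levels):
    """Split a triangle into 4 smaller ones per subdivision level."""
    if levels == 0:
        return [tri]
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    out = []
    # The three corner triangles plus the middle one tile the original exactly.
    for sub in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)):
        out.extend(tessellate(sub, levels - 1))
    return out

tris = tessellate(((0, 0), (1, 0), (0, 1)), 3)
print(len(tris))  # 4**3 = 64 small triangles from one input triangle
```

The real tessellator is driven by hull and domain shaders and produces the extra triangles on the GPU, so the detailed mesh never has to be stored in memory or sent over the bus.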
Like I said, it is only supported on Windows Vista/Windows 7 and future operating system releases. Shader Model 5.0 is also introduced in this release and of course requires a videocard that supports DirectX 11. Version 11 is in fact a superset of version 10, so all hardware and API features of version 10.1 are retained. This is very handy, because new features only get added when they are needed, which keeps compatibility with previous versions intact.
DirectX 11.1 will be included in Windows 8 and will incorporate WDDM 1.2, which provides better performance and tighter integration of Direct2D, Direct3D and DirectCompute. DirectXMath, XInput and XAudio2 will also be included as libraries.
Anti Aliasing (AA)
Anti stands for neutralizing or counteracting, and aliasing is the jagged staircase effect on curves and diagonal lines. So anti aliasing counteracts and neutralizes jagged lines. It is part of today's graphics technology and makes games and graphics look better and more attractive. I won't go into detail in this Hub because I'll be devoting another Hub to this subject. The picture on the right shows an example of what anti aliasing can do.
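The basic idea can be shown in a few lines of Python. This is a hedged sketch of supersampling, one simple anti aliasing technique (real videocards use faster variants like multisampling): instead of testing only the pixel center against a hypothetical diagonal edge, we average many sample points inside the pixel, so edge pixels get in-between gray values that smooth the staircase.

```python
def edge(x, y):
    # Hypothetical scene: everything below the diagonal y < x is "inside" (1.0).
    return 1.0 if y < x else 0.0

def pixel_aliased(px, py):
    # One sample at the pixel center: the result is either 0 or 1,
    # which produces the jagged staircase along the diagonal.
    return edge(px + 0.5, py + 0.5)

def pixel_antialiased(px, py, grid=4):
    # Supersampling: take grid*grid samples inside the pixel and average
    # their coverage, giving fractional values that smooth the edge.
    total = 0.0
    for i in range(grid):
        for j in range(grid):
            total += edge(px + (i + 0.5) / grid, py + (j + 0.5) / grid)
    return total / (grid * grid)

print(pixel_aliased(2, 2))      # 0.0: hard on/off step at the edge
print(pixel_antialiased(2, 2))  # 0.375: a partial-coverage gray value
```

A pixel the edge cuts through ends up partially lit instead of fully on or off, which is exactly the softening you see in anti aliased screenshots.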
SLI (Scalable Link Interface) and Crossfire
SLI (then short for "Scan-Line Interleave") was originally a technology made by a company called 3dfx back in 1998. They were specialists in 3D videocards with, back then, cutting edge technology, until they went bankrupt in 2000 and were acquired by Nvidia. In 2004 Nvidia reintroduced SLI as the Scalable Link Interface, and ATI followed with its own version called Crossfire. SLI and Crossfire are technologies that allow multiple videocards to operate in parallel, so more graphics data can be processed in the same time. This increases the frame rate, because more frames are generated each second; it also allows better image quality or a higher resolution without the videocards slowing down. Frame rate is the speed at which the pictures are built up on your screen; below 30 frames per second isn't desirable, and the higher the better! The two videocards are connected with a bridge connector so they can coordinate which part of the rendering work each card takes on.
You need two identical videocards, and both have to be the same brand: either Nvidia or ATI. You can't mix an ATI and an Nvidia card together. Then you enable SLI or Crossfire in the setup of your videocard drivers.
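To see why two cards raise the frame rate, here is a small Python sketch of alternate frame rendering, the common SLI/Crossfire mode where the cards take turns drawing whole frames. The numbers (40 ms per frame, 10% bridge/synchronization overhead) are illustrative assumptions, not measured values:

```python
def fps_single(frame_time_ms):
    # One GPU draws every frame back-to-back.
    return 1000.0 / frame_time_ms

def fps_afr(frame_time_ms, gpus=2, overhead=0.1):
    # Alternate Frame Rendering: GPUs take turns drawing whole frames,
    # so frames arrive almost 'gpus' times as often, minus some
    # coordination overhead over the bridge (overhead is a guess here).
    return fps_single(frame_time_ms) * gpus * (1 - overhead)

print(round(fps_single(40)))  # 25 fps on one card: below the desirable 30
print(round(fps_afr(40)))     # 45 fps with two cards alternating frames
```

In practice the scaling depends heavily on the game and drivers, but the principle holds: sharing frames between cards multiplies throughput rather than making any single frame faster.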