  1. Added support for X.org xserver ABI 15 (xorg-server 1.15).
  2. Updated nvidia-installer to consider the "libglamoregl.so" X loadable extension module to be in conflict with the NVIDIA OpenGL driver.
  3. This module can cause the NVIDIA libGL to be loaded into the same process (the X server) as the NVIDIA libglx.so extension module, which is not a supported use case.
  4.  
  5.  
  6.  
  7.  
  8. NVIDIA Accelerated Linux Graphics Driver README and Installation Guide
  9.  
  10.     NVIDIA Corporation
  11.     Last Updated: 2010/11/22
  12.     Most Recent Driver Version: 173.14.39
  13.  
  14. Published by
  15. NVIDIA Corporation
  16. 2701 San Tomas Expressway
  17. Santa Clara, CA
  18. 95050
  19.  
  20.  
  21. NOTICE:
  22.  
  23. ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS,
  24. DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, "MATERIALS")
  25. ARE BEING PROVIDED "AS IS." NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED,
  26. STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS
  27. ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A
  28. PARTICULAR PURPOSE. Information furnished is believed to be accurate and
  29. reliable. However, NVIDIA Corporation assumes no responsibility for the
  30. consequences of use of such information or for any infringement of patents or
  31. other rights of third parties that may result from its use. No license is
  32. granted by implication or otherwise under any patent or patent rights of
  33. NVIDIA Corporation. Specifications mentioned in this publication are subject
  34. to change without notice. This publication supersedes and replaces all
  35. information previously supplied. NVIDIA Corporation products are not
  36. authorized for use as critical components in life support devices or systems
  37. without express written approval of NVIDIA Corporation.
  38.  
  39. NVIDIA, the NVIDIA logo, NVIDIA nForce, GeForce, NVIDIA Quadro, Vanta, TNT2,
  40. TNT, RIVA, RIVA TNT, Quincunx Antialiasing, and TwinView are registered
  41. trademarks or trademarks of NVIDIA Corporation in the United States and/or
  42. other countries.
  43.  
  44. Linux is a registered trademark of Linus Torvalds. Fedora and RedHat are
  45. trademarks of Red Hat, Inc. SuSE is a registered trademark of SuSE AG.
  46. Mandrake is a registered trademark of Mandrakesoft SA. Intel and Pentium are
  47. registered trademarks of Intel Corporation. Athlon is a registered trademark
  48. of Advanced Micro Devices. OpenGL is a registered trademark of Silicon
  49. Graphics Inc. PCI Express is a registered trademark and/or service mark of
  50. PCI-SIG. Windows is a registered trademark of Microsoft Corporation in the
  51. United States and other countries. Other company and product names may be
  52. trademarks or registered trademarks of the respective owners with which they
  53. are associated.
  54.  
  55.  
  56. Copyright 2006 NVIDIA Corporation. All rights reserved.
  57.  
  58. ______________________________________________________________________________
  59.  
  60. TABLE OF CONTENTS
  61. ______________________________________________________________________________
  62.  
  63. Chapter 1. Introduction
  64. Chapter 2. Minimum Software Requirements
  65. Chapter 3. Selecting and Downloading the NVIDIA Packages for Your System
  66. Chapter 4. Installing the NVIDIA Driver
  67. Chapter 5. Listing of Installed Components
  68. Chapter 6. Configuring X for the NVIDIA Driver
  69. Chapter 7. Frequently Asked Questions
  70. Chapter 8. Common Problems
  71. Chapter 9. Known Issues
  72. Chapter 10. Allocating DMA Buffers on 64-bit Platforms
  73. Chapter 11. Specifying OpenGL Environment Variable Settings
  74. Chapter 12. Configuring AGP
  75. Chapter 13. Configuring TwinView
  76. Chapter 14. Configuring GLX in Xinerama
  77. Chapter 15. Configuring Multiple X Screens on One Card
  78. Chapter 16. Configuring TV-Out
  79. Chapter 17. Using the XRandR Extension
  80. Chapter 18. Configuring a Notebook
  81. Chapter 19. Programming Modes
  82. Chapter 20. Configuring Flipping and UBB
  83. Chapter 21. Using the Proc Filesystem Interface
  84. Chapter 22. Configuring Power Management Support
  85. Chapter 23. Using the X Composite Extension
  86. Chapter 24. Using the nvidia-settings Utility
  87. Chapter 25. Configuring SLI and Multi-GPU FrameRendering
  88. Chapter 26. Configuring Frame Lock and Genlock
  89. Chapter 27. Configuring SDI Video Output
  90. Chapter 28. Configuring Depth 30 Displays
  91. Chapter 29. NVIDIA Contact Info and Additional Resources
  92. Chapter 30. Acknowledgements
  93.  
  94. Appendix A. Supported NVIDIA GPU Products
  95. Appendix B. X Config Options
  96. Appendix C. Display Device Names
  97. Appendix D. GLX Support
  98. Appendix E. Dots Per Inch
  99. Appendix F. i2c Bus Support
  100. Appendix G. XvMC Support
  101. Appendix H. Tips for New Linux Users
  102.  
  103. ______________________________________________________________________________
  104.  
  105. Chapter 1. Introduction
  106. ______________________________________________________________________________
  107.  
  108.  
  109. 1A. ABOUT THE NVIDIA ACCELERATED LINUX GRAPHICS DRIVER
  110.  
  111. The NVIDIA Accelerated Linux Graphics Driver brings accelerated 2D
  112. functionality and high-performance OpenGL support to Linux x86 with the use of
  113. NVIDIA graphics processing units (GPUs).
  114.  
  115. These drivers provide optimized hardware acceleration for OpenGL and X
  116. applications and support nearly all recent NVIDIA GPU products (see Appendix A
  117. for a complete list of supported GPUs). TwinView, TV-Out and flat panel
  118. displays are also supported.
  119.  
  120.  
  121. 1B. ABOUT THIS DOCUMENT
  122.  
  123. This document provides instructions for the installation and use of the NVIDIA
  124. Accelerated Linux Graphics Driver. Chapter 3, Chapter 4 and Chapter 6 walk the
  125. user through the process of downloading, installing and configuring the
  126. driver. Chapter 7 addresses frequently asked questions about the installation
  127. process, and Chapter 8 provides solutions to common problems. The remaining
  128. chapters include details on different features of the NVIDIA Linux Driver.
  129. Frequently asked questions about specific tasks are included in the relevant
  130. chapters. These pages are posted on NVIDIA's web site (http://www.nvidia.com),
  131. and are installed in '/usr/share/doc/NVIDIA_GLX-1.0/'.
  132.  
  133.  
  134. 1C. ABOUT THE AUDIENCE
  135.  
  136. It is assumed that the reader of this document has at least a basic
  137. understanding of Linux techniques and terminology. However, new Linux users
  138. can refer to Appendix H for details on parts of the installation process.
  139.  
  140.  
  141. 1D. ADDITIONAL INFORMATION
  142.  
  143. In case additional information is required, Chapter 29 provides contact
  144. information for NVIDIA Linux driver resources, as well as a brief listing of
  145. external resources.
  146.  
  147. ______________________________________________________________________________
  148.  
  149. Chapter 2. Minimum Software Requirements
  150. ______________________________________________________________________________
  151.  
  152.  
  153.  
  154.    Software Element         Supported versions       Check With...
  155.    ---------------------    ---------------------    ---------------------
  156.    Linux kernel             2.4.22 and newer         `cat /proc/version`
  157.    XFree86*                 4.0.1 and newer          `XFree86 -version`
  158.    X.Org*                   1.0, 1.1, 1.2, 1.3,      `Xorg -version`
  159.                             1.4, 1.5, 1.6, 1.7,  
  160.                             1.8, 1.9, 1.10, 1.11,
  161.                             1.12, 1.13, 1.14,    
  162.                             1.15                
  163.    Kernel modutils          2.1.121 and newer        `insmod -v`
  164.  
  165. * It is only required that you have one of XFree86 or X.Org, not both.
  166. Sometimes very recent versions are not supported immediately following
  167. release, but we aim to support all new versions as soon as possible.
  168.  
  169. If you need to build the NVIDIA kernel module:
  170.  
  171.    Software Element         Min Requirement          Check With...
  172.    ---------------------    ---------------------    ---------------------
  173.    binutils                 2.9.5                    `size --version`
  174.    GNU make                 3.77                     `make --version`
  175.    gcc                      2.91.66                  `gcc --version`
  176.    glibc                    2.0                      `ls /lib/libc.so.* >
  177.                                                      6`
  178.  
  179.  
  180. If you build from source RPMs:
  181.  
  182.    Required Software Element             Check With...
  183.    ----------------------------------    ----------------------------------
  184.    spec-helper rpm                       `rpm -qi spec-helper`
  185.  
  186.  
  187. All official stable kernel releases from 2.4.22 and up are supported;
  188. "prerelease" versions such as "2.6.23-rc1" are not supported, nor are
  189. development series kernels such as 2.3.x or 2.5.x. The Linux kernel can be
  190. downloaded from http://www.kernel.org or one of its mirrors.
  191.  
  192. binutils and gcc can be retrieved from http://www.gnu.org or one of its
  193. mirrors.
  194.  
  195. If you are using XFree86, but do not have a file '/var/log/XFree86.0.log',
  196. then you probably have a 3.x version of XFree86 and must upgrade.
  197.  
  198. If you are setting up XFree86 4.x for the first time, it is often easier to
  199. begin with one of the open source drivers that ships with XFree86 (either
  200. "nv", "vga" or "vesa"). Once XFree86 is operating properly with the open
  201. source driver, you may then switch to the NVIDIA driver.
  202.  
  203. Note that newer NVIDIA GPUs may not work with older versions of the "nv"
  204. driver shipped with XFree86. For example, the "nv" driver that shipped with
  205. XFree86 version 4.0.1 did not recognize the GeForce2 family and the Quadro2
  206. MXR GPUs. This was fixed in XFree86 version 4.0.2. XFree86 can be retrieved
  207. from http://www.xfree86.org.
  208.  
  209. These software packages may also be available through your Linux distributor.
  210.  
  211. ______________________________________________________________________________
  212.  
  213. Chapter 3. Selecting and Downloading the NVIDIA Packages for Your System
  214. ______________________________________________________________________________
  215.  
  216. NVIDIA drivers can be downloaded from the NVIDIA website
  217. (http://www.nvidia.com).
  218.  
  219. The NVIDIA driver follows a Unified Architecture Model in which a single
  220. graphics driver is used for all supported NVIDIA GPU products (see Appendix A
  221. for a list of supported GPUs). The burden of selecting the correct driver is
  222. removed from the user, and the graphics driver is downloaded as a single file
  223. named
  224.  
  225.     'NVIDIA-Linux-x86-173.14.39-pkg1.run'
  226.  
  227. The package suffix ('-pkg#') is used to distinguish between packages
  228. containing the same driver, but with different precompiled kernel interfaces.
  229. The file with the highest package number is suitable for most installations.
  230.  
  231. Support for "legacy" GPUs has been removed from the unified driver. These
  232. legacy GPUs will continue to be maintained through special legacy GPU driver
  233. releases. See Appendix A for a list of legacy GPUs.
  234.  
  235. The downloaded file is a self-extracting installer, and you may place it
  236. anywhere on your system.
  237.  
  238. ______________________________________________________________________________
  239.  
  240. Chapter 4. Installing the NVIDIA Driver
  241. ______________________________________________________________________________
  242.  
  243. This chapter provides instructions for installing the NVIDIA driver. Note that
  244. after installation, but prior to using the driver, you must complete the steps
  245. described in Chapter 6. Additional details that may be helpful for the new
  246. Linux user are provided in Appendix H.
  247.  
  248.  
  249. 4A. BEFORE YOU BEGIN
  250.  
  251. Before you begin the installation, exit the X server and terminate all OpenGL
  252. applications (note that it is possible that some OpenGL applications persist
  253. even after the X server has stopped). You should also set the default run
  254. level on your system such that it will boot to a VGA console, and not directly
  255. to X. Doing so will make it easier to recover if there is a problem during the
  256. installation process. See Appendix H for details.
  257.  
  258.  
  259. 4B. STARTING THE INSTALLER
  260.  
  261. After you have downloaded the file 'NVIDIA-Linux-x86-173.14.39-pkg#.run',
  262. change to the directory containing the downloaded file, and as the 'root' user
  263. run the executable:
  264.  
  265.     # cd yourdirectory
  266.     # sh NVIDIA-Linux-x86-173.14.39-pkg#.run
  267.  
  268. The '.run' file is a self-extracting archive. When executed, it extracts the
  269. contents of the archive and runs the contained 'nvidia-installer' utility,
  270. which provides an interactive interface to walk you through the installation.
  271.  
  272.  'nvidia-installer' will also install itself to '/usr/bin/nvidia-installer',
  273. which may be used at some later time to uninstall drivers, auto-download
  274. updated drivers, etc. The use of this utility is detailed later in this
  275. chapter.
  276.  
  277. You may also supply command line options to the '.run' file. Some of the more
  278. common options are listed below.
  279.  
  280. Common '.run' Options
  281.  
  282. --info
  283.  
  284.     Print embedded info about the '.run' file and exit.
  285.  
  286. --check
  287.  
  288.     Check integrity of the archive and exit.
  289.  
  290. --extract-only
  291.  
  292.     Extract the contents of './NVIDIA-Linux-x86-173.14.39.run', but do not run
  293.     'nvidia-installer'.
  294.  
  295. --help
  296.  
  297.     Print usage information for the common commandline options and exit.
  298.  
  299. --advanced-options
  300.  
  301.     Print usage information for common command line options as well as the
  302.     advanced options, and then exit.
  303.  
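For example, a cautious installation might first verify the integrity of the
downloaded archive and print its embedded information before running the
installer (the filename below assumes the pkg1 package; adjust it to match the
file you downloaded):

       # sh NVIDIA-Linux-x86-173.14.39-pkg1.run --check
       # sh NVIDIA-Linux-x86-173.14.39-pkg1.run --info
       # sh NVIDIA-Linux-x86-173.14.39-pkg1.run
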
  304.  
  305.  
  306. 4C. INSTALLING THE KERNEL INTERFACE
  307.  
  308. The NVIDIA kernel module has a kernel interface layer that must be compiled
  309. specifically for each kernel. NVIDIA distributes the source code to this
  310. kernel interface layer, as well as precompiled versions for many of the
  311. kernels provided by popular Linux distributions.
  312.  
  313. When the installer is run, it will determine if it has a precompiled kernel
  314. interface for the kernel you are running. If it does not have one, the
  315. installer will check your system for the required kernel sources and compile
  316. the interface for you. You must have the source code for your kernel installed
  317. for compilation to work. On most systems, this means that you will need to
  318. locate and install the correct kernel-source, kernel-headers, or kernel-devel
  319. package; on some distributions, no additional packages are required (e.g.
  320. Fedora Core 3, Red Hat Enterprise Linux 4).
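For example, on many RPM-based distributions the kernel sources or headers can
be installed with the distribution's package manager, and Debian-derived
distributions provide matching linux-headers packages. The package names below
are only illustrations and vary by distribution and kernel version:

       # yum install kernel-devel                        (Red Hat/Fedora)
       # apt-get install linux-headers-$(uname -r)       (Debian/Ubuntu)
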
  321.  
  322. After the correct kernel interface has been identified (either included in the
  323. '.run' file or compiled from source code), the kernel interface will be linked
  324. with the closed-source portion of the NVIDIA kernel module. This requires that
  325. you have a linker installed on your system. The linker, usually '/usr/bin/ld',
  326. is part of the binutils package. You must have a linker installed prior to
  327. installing the NVIDIA driver.
  328.  
  329.  
  330. 4D. FEATURES OF THE INSTALLER
  331.  
  332. Without options, the '.run' file executes the installer after unpacking it.
  333. The installer can be run as a separate step in the process, or can be run at a
  334. later time to get updates, etc. Some of the more important commandline options
  335. of 'nvidia-installer' are:
  336.  
  337. 'nvidia-installer' options
  338.  
  339. --uninstall
  340.  
  341.     During installation, the installer will make backups of any conflicting
  342.     files and record the installation of new files. The uninstall option
  343.     undoes an install, restoring the system to its pre-install state.
  344.  
  345. --latest
  346.  
  347.     Connect to NVIDIA's FTP site, and report the latest driver version and the
  348.     URL to the latest driver file.
  349.  
  350. --update
  351.  
  352.    Connect to NVIDIA's FTP site, download the most recent driver file, and
  353.     install it.
  354.  
  355. --ui=none
  356.  
  357.     The installer uses an ncurses-based user interface if it is able to locate
  358.     the correct ncurses library. Otherwise, it will fall back to a simple
  359.     commandline user interface. This option disables the use of the ncurses
  360.     library.
  361.  
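For example, to query NVIDIA's FTP site for the most recent driver version, or
to remove the currently installed driver, run (as root):

       # nvidia-installer --latest
       # nvidia-installer --uninstall
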
  362.  
  363. ______________________________________________________________________________
  364.  
  365. Chapter 5. Listing of Installed Components
  366. ______________________________________________________________________________
  367.  
  368. The NVIDIA Accelerated Linux Graphics Driver consists of the following
  369. components (filenames in parentheses are the full names of the components
  370. after installation; "x.y.z" denotes the current version. In these cases
  371. appropriate symlinks are created during installation):
  372.  
  373.    o An X driver (/usr/X11R6/lib/modules/drivers/nvidia_drv.so); this driver
  374.      is needed by the X server to use your NVIDIA hardware.
  375.  
  376.    o A GLX extension module for X
  377.      (/usr/X11R6/lib/modules/extensions/libglx.so.x.y.z); this module is used
  378.      by the X server to provide server-side GLX support.
  379.  
  380.    o An X module for wrapped software rendering
  381.      (/usr/X11R6/lib/modules/libnvidia-wfb.so.x.y.z and optionally,
  382.      /usr/X11R6/lib/modules/libwfb.so); this module is used by the X driver to
  383.      perform software rendering on GeForce 8 series GPUs. If libwfb.so already
  384.      exists, nvidia-installer will not overwrite it. Otherwise, it will create
  385.      a symbolic link from libwfb.so to libnvidia-wfb.so.x.y.z.
  386.  
  387.    o An OpenGL library (/usr/lib/libGL.so.x.y.z); this library provides the
  388.      API entry points for all OpenGL and GLX function calls. It is linked to
  389.      at run-time by OpenGL applications.
  390.  
  391.    o An OpenGL core library (/usr/lib/libGLcore.so.x.y.z); this library is
  392.      implicitly used by libGL and by libglx. It contains the core accelerated
  393.      3D functionality. You should not explicitly load it in your X config file
  394.      -- that is taken care of by libglx.
  395.  
  396.    o Two XvMC (X-Video Motion Compensation) libraries: a static library and a
  397.      shared library (/usr/X11R6/lib/libXvMCNVIDIA.a,
  398.      /usr/X11R6/lib/libXvMCNVIDIA.so.x.y.z); see Appendix G for details.
  399.  
  400.    o A kernel module (/lib/modules/`uname -r`/video/nvidia.o or
  401.      /lib/modules/`uname -r`/kernel/drivers/video/nvidia.o); this kernel
  402.      module provides low-level access to your NVIDIA hardware for all of the
  403.      above components. It is generally loaded into the kernel when the X
  404.      server is started, and is used by the X driver and OpenGL. nvidia.o
  405.      consists of two pieces: the binary-only core, and a kernel interface that
  406.      must be compiled specifically for your kernel version. Note that the
  407.      Linux kernel does not have a consistent binary interface like the X
  408.      server, so it is important that this kernel interface be matched with the
  409.      version of the kernel that you are using. This can either be accomplished
  410.      by compiling yourself, or using precompiled binaries provided for the
  411.      kernels shipped with some of the more common Linux distributions.
  412.  
  413.    o OpenGL and GLX header files (/usr/include/GL/gl.h,
  414.      /usr/include/GL/glext.h, /usr/include/GL/glx.h, and
  415.      /usr/include/GL/glxext.h); these are also installed in
  416.      /usr/share/doc/NVIDIA_GLX-1.0/include/GL/. You can request that these
  417.      files not be included in /usr/include/GL/ by passing the
  418.      "--no-opengl-headers" option to the .run file during installation.
  419.  
  420.    o The nvidia-tls libraries (/usr/lib/libnvidia-tls.so.x.y.z and
  421.      /usr/lib/tls/libnvidia-tls.so.x.y.z); these files provide thread local
  422.      storage support for the NVIDIA OpenGL libraries (libGL, libGLcore, and
  423.      libglx). Each nvidia-tls library provides support for a particular thread
  424.      local storage model (such as ELF TLS), and the one appropriate for your
  425.      system will be loaded at run time.
  426.  
  427.    o The application nvidia-installer (/usr/bin/nvidia-installer) is NVIDIA's
  428.      tool for installing and updating NVIDIA drivers. See Chapter 4 for a more
  429.      thorough description.
  430.  
  431.  
  432. Problems will arise if applications use the wrong version of a library. This
  433. can be the case if there are either old libGL libraries or stale symlinks left
  434. lying around. If you think there may be something awry in your installation,
  435. check that the following files are in place (these are all the files of the
  436. NVIDIA Accelerated Linux Graphics Driver, as well as their symlinks):
  437.  
  438.    /usr/X11R6/lib/modules/drivers/nvidia_drv.so
  439.  
  440.    /usr/X11R6/lib/modules/extensions/libglx.so.x.y.z
  441.    /usr/X11R6/lib/modules/extensions/libglx.so -> libglx.so.x.y.z
  442.  
  443.    (may also be in /usr/lib/modules or /usr/lib/xorg/modules)
  444.  
  445.    /usr/lib/libGL.so.x.y.z
  446.    /usr/lib/libGL.so.x -> libGL.so.x.y.z
  447.    /usr/lib/libGL.so -> libGL.so.x
  448.  
  449.    /usr/lib/libGLcore.so.x.y.z
  450.    /usr/lib/libGLcore.so.x -> libGLcore.so.x.y.z
  451.  
  452.    /lib/modules/`uname -r`/video/nvidia.o, or
  453.    /lib/modules/`uname -r`/kernel/drivers/video/nvidia.o
  454.  
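A quick way to inspect the installed OpenGL libraries and their symlinks is
with `ls`; the exact version numbers in the output will differ from system to
system:

       % ls -l /usr/lib/libGL.so* /usr/lib/libGLcore.so*
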
  455. If there are other libraries whose "soname" conflicts with that of the NVIDIA
  456. libraries, ldconfig may create the wrong symlinks. It is recommended that you
  457. manually remove or rename conflicting libraries (be sure to rename clashing
  458. libraries to something that ldconfig will not look at -- we have found that
  459. prepending "XXX" to a library name generally does the trick), rerun
  460. 'ldconfig', and check that the correct symlinks were made. Some libraries that
  461. often create conflicts are "/usr/X11R6/lib/libGL.so*" and
  462. "/usr/X11R6/lib/libGLcore.so*".
  463.  
  464. If the libraries appear to be correct, then verify that the application is
  465. using the correct libraries. For example, to check that the application
  466. /usr/X11R6/bin/glxgears is using the NVIDIA libraries, run:
  467.  
  468.    % ldd /usr/X11R6/bin/glxgears
  469.        linux-gate.so.1 =>  (0xffffe000)
  470.        libGL.so.1 => /usr/lib/libGL.so.1 (0xb7ed3000)
  471.        libXp.so.6 => /usr/lib/libXp.so.6 (0xb7eca000)
  472.        libXext.so.6 => /usr/lib/libXext.so.6 (0xb7eb9000)
  473.        libX11.so.6 => /usr/lib/libX11.so.6 (0xb7dd4000)
  474.        libpthread.so.0 => /lib/libpthread.so.0 (0xb7d82000)
  475.        libm.so.6 => /lib/libm.so.6 (0xb7d5f000)
  476.        libc.so.6 => /lib/libc.so.6 (0xb7c47000)
  477.        libGLcore.so.1 => /usr/lib/libGLcore.so.1 (0xb6c2f000)
  478.        libnvidia-tls.so.1 => /usr/lib/tls/libnvidia-tls.so.1 (0xb6c2d000)
  479.        libdl.so.2 => /lib/libdl.so.2 (0xb6c29000)
  480.        /lib/ld-linux.so.2 (0xb7fb2000)
  481.  
  482. Check the files being used for libGL and libGLcore -- if they are something
  483. other than the NVIDIA libraries, then you will need to either remove the
  484. libraries that are getting in the way or adjust your ld search path using the
  485. 'LD_LIBRARY_PATH' environment variable. You may want to consult the man pages
  486. for 'ldconfig' and 'ldd'.
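As a purely diagnostic sketch (removing or renaming the conflicting libraries
is the preferred fix), you can check how an application would resolve its
libraries if '/usr/lib' -- where the NVIDIA libGL is installed -- is searched
first:

       % LD_LIBRARY_PATH=/usr/lib ldd /usr/X11R6/bin/glxgears
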
  487.  
  488. ______________________________________________________________________________
  489.  
  490. Chapter 6. Configuring X for the NVIDIA Driver
  491. ______________________________________________________________________________
  492.  
  493. The X configuration file provides a means to configure the X server. This
  494. section describes the settings necessary to enable the NVIDIA driver. A
  495. comprehensive list of parameters is provided in Appendix B.
  496.  
  497. The NVIDIA Driver includes a utility called nvidia-xconfig, which is designed
  498. to make editing the X configuration file easy. You can also edit it by hand.
  499.  
  500.  
  501. 6A. USING NVIDIA-XCONFIG TO CONFIGURE THE X SERVER
  502.  
  503. nvidia-xconfig will find the X configuration file and modify it to use the
  504. NVIDIA X driver. In most cases, you can simply answer "Yes" when the installer
  505. asks if it should run it. If you need to reconfigure your X server later, you
  506. can run nvidia-xconfig again from a terminal. nvidia-xconfig will make a
  507. backup copy of your configuration file before modifying it.
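For example, to reconfigure the X server later, run the utility as root with
no arguments:

       # nvidia-xconfig
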
  508.  
  509. Note that the X server must be restarted for any changes to its configuration
  510. file to take effect.
  511.  
  512. More information about nvidia-xconfig can be found in the nvidia-xconfig
  513. manual page by running:
  514.  
  515.    % man nvidia-xconfig
  516.  
  517.  
  518.  
  519.  
  520. 6B. MANUALLY EDITING THE CONFIGURATION FILE
  521.  
  522. In April 2004 the X.Org Foundation released an X server based on the XFree86
  523. server. While your release may use the X.Org X server, rather than XFree86,
  524. the differences between the two should have no impact on NVIDIA Linux users
  525. with two exceptions:
  526.  
  527.   o The X.Org configuration file is '/etc/X11/xorg.conf' while the XFree86
  528.     configuration file is '/etc/X11/XF86Config'. The files use the same
  529.     syntax. This document refers to both files as "the X config file".
  530.  
  531.   o The X.Org log file is '/var/log/Xorg.#.log' while the XFree86 log file is
  532.      '/var/log/XFree86.#.log' (where '#' is the server number -- usually 0).
  533.      The format of the log files is nearly identical. This document refers to
  534.      both files as "the X log file".
  535.  
  536. In order for any changes to be read into the X server, you must edit the file
  537. used by the server. While it is not unreasonable to simply edit both files, it
  538. is easy to determine the correct file by searching for the line
  539.  
  540.     (==) Using config file:
  541.  
  542. in the X log file. This line indicates the name of the X config file in use.
  543.  
  544. If you do not have a working X config file, there are a few different ways to
  545. obtain one. A sample config file is included both with the XFree86
  546. distribution and with the NVIDIA driver package (at
  547. '/usr/share/doc/NVIDIA_GLX-1.0/'). Tools for generating a config file (such as
  548. 'xf86config') are generally included with Linux. Additional information on the
  549. X config syntax can be found in the XF86Config manual page (`man XF86Config`
  550. or `man xorg.conf`).
  551.  
  552. If you have a working X config file for a different driver (such as the "nv"
  553. or "vesa" driver), then simply edit the file as follows.
  554.  
  555. Remove the line:
  556.  
  557.       Driver "nv"
  558.   (or Driver "vesa")
  559.   (or Driver "fbdev")
  560.  
  561. and replace it with the line:
  562.  
  563.     Driver "nvidia"
  564.  
  565. Remove the following lines:
  566.  
  567.     Load "dri"
  568.     Load "GLCore"
  569.  
  570. In the "Module" section of the file, add the line (if it does not already
  571. exist):
  572.  
  573.     Load "glx"
  574.  
  575. If the X config file does not have a "Module" section, you can safely skip the
  576. last step if the X server installed on your system is an X.Org X server or an
  577. XFree86 X release version 4.4.0 or greater. If you are using an older XFree86
  578. X server, add the following to your X config file:
  579.  
  580. Section "Module"
  581.     Load "extmod"
  582.     Load "dbe"
  583.     Load "type1"
  584.     Load "freetype"
  585.     Load "glx"
  586. EndSection
  587.  
  588. There are numerous options that may be added to the X config file to tune the
  589. NVIDIA X driver. See Appendix B for a complete list of these options.
  590.  
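Putting the edits above together, a minimal "Device" section for the NVIDIA
driver might look like the following sketch (the Identifier string is only a
placeholder; keep any BusID or other entries already present in your existing
section):

    Section "Device"
        Identifier  "NVIDIA Graphics Card"   # any unique name will do
        Driver      "nvidia"
    EndSection
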
  591. Once you have completed these edits to the X config file, you may restart X
  592. and begin using the accelerated OpenGL libraries. After restarting X, any
  593. OpenGL application should automatically use the new NVIDIA libraries. (NOTE:
  594. If you encounter any problems, see Chapter 8 for common problem diagnoses.)
  595.  
  596. ______________________________________________________________________________
  597.  
  598. Chapter 7. Frequently Asked Questions
  599. ______________________________________________________________________________
  600.  
  601. This section provides answers to frequently asked questions associated with
  602. the NVIDIA Linux x86 Driver and its installation. Common problem diagnoses can
  603. be found in Chapter 8 and tips for new users can be found in Appendix H. Also,
  604. detailed information for specific setups is provided in the Appendices.
  605.  
  606.  
  607. NVIDIA-INSTALLER
  608.  
  609. Q. How do I extract the contents of the '.run' without actually installing the
  610.    driver?
  611.  
  612. A. Run the installer as follows:
  613.    
  614.        # sh NVIDIA-Linux-x86-173.14.39-pkg1.run --extract-only
  615.    
  616.    This will create the directory NVIDIA-Linux-x86-173.14.39-pkg1, containing
  617.    the uncompressed contents of the '.run' file.
  618.  
  619.  
  620. Q. How can I see the source code to the kernel interface layer?
  621.  
  622. A. The source files to the kernel interface layer are in the usr/src/nv
  623.    directory of the extracted .run file. To get to these sources, run:
  624.    
  625.        # sh NVIDIA-Linux-x86-1.0-6629-pkg1.run --extract-only
  626.        # cd NVIDIA-Linux-x86-1.0-6629-pkg1/usr/src/nv/
  627.    
  628.    
  629.  
  630. Q. How and when are the NVIDIA device files created?
  631.  
  632. A. Depending on the target system's configuration, the NVIDIA device files
  633.   used to be created in one of three different ways:
  634.  
  635.      o at installation time, using mknod
  636.  
  637.      o at module load time, via devfs (Linux device file system)
  638.  
  639.      o at module load time, via hotplug/udev
  640.  
  641.   With current NVIDIA driver releases, device files are created or modified
  642.   by the X driver when the X server is started.
  643.  
  644.   By default, the NVIDIA driver will attempt to create device files with the
  645.   following attributes:
  646.  
  647.         UID:  0     - 'root'
  648.         GID:  0     - 'root'
  649.         Mode: 0666  - 'rw-rw-rw-'
  650.  
  651.   Existing device files are changed if their attributes don't match these
  652.    defaults. If you want the NVIDIA driver to create the device files with
  653.    different attributes, you can specify them with the "NVreg_DeviceFileUID"
  654.    (user), "NVreg_DeviceFileGID" (group) and "NVreg_DeviceFileMode" NVIDIA
  655.    Linux kernel module parameters.
  656.  
  657.    For example, the NVIDIA driver can be instructed to create device files
  658.    with UID=0 (root), GID=44 (video) and Mode=0660 by passing the following
  659.    module parameters to the NVIDIA Linux kernel module:
  660.    
  661.          NVreg_DeviceFileUID=0
  662.          NVreg_DeviceFileGID=44
  663.          NVreg_DeviceFileMode=0660
  664.    
  665.    The "NVreg_ModifyDeviceFiles" NVIDIA kernel module parameter will disable
  666.    dynamic device file management, if set to 0.
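   One common way to pass such module parameters is with an "options" line in
   the configuration file read by modprobe; the file name below is only an
   example and varies by distribution:

       # excerpt from /etc/modprobe.conf (location varies by distribution)
       options nvidia NVreg_DeviceFileGID=44 NVreg_DeviceFileMode=0660
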
  667.  
  668.  
  669. Q. Why does NVIDIA not provide RPMs anymore?
  670.  
  671. A. Not every Linux distribution uses RPM, and NVIDIA wanted a single solution
  672.    that would work across all Linux distributions. As indicated in the NVIDIA
  673.    Software License, Linux distributions are welcome to repackage and
  674.    redistribute the NVIDIA Linux driver in whatever package format they wish.
  675.  
  676.  
  677. Q. Can the nvidia-installer use a proxy server?
  678.  
  679. A. Yes, because the FTP support in nvidia-installer is based on snarf, it will
  680.    honor the 'FTP_PROXY', 'SNARF_PROXY', and 'PROXY' environment variables.
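   For example, in a bash shell (the proxy address below is only a placeholder,
   and the exact format accepted depends on the snarf version in use):

       % export FTP_PROXY=http://proxy.example.com:3128   # placeholder address
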
  681.  
  682.  
  683. Q. What is the significance of the 'pkg#' suffix on the '.run' file?
  684.  
  685. A. The 'pkg#' suffix is used to distinguish between '.run' files containing
  686.    the same driver, but different sets of precompiled kernel interfaces. If a
  687.    distribution releases a new kernel after an NVIDIA driver is released, the
  688.    current NVIDIA driver can be repackaged to include a precompiled kernel
  689.    interface for that newer kernel (in addition to all the precompiled kernel
  690.    interfaces that were included in the previous package of the driver).
  691.  
  692.     '.run' files with the same version number, but different pkg numbers, only
  693.    differ in what precompiled kernel interfaces are included. Additionally,
  694.    '.run' files with higher pkg numbers will contain everything the '.run'
  695.    files with lower pkg numbers contain.
  696.  
  697.  
  698. Q. I have already installed NVIDIA-Linux-x86-173.14.39-pkg1.run, but I see
  699.    that NVIDIA-Linux-x86-173.14.39-pkg2.run was just posted on the NVIDIA
  700.    Linux driver download page. Should I download and install
  701.    NVIDIA-Linux-x86-173.14.39-pkg2.run?
  702.  
  703. A. This is not necessary. The driver contained within all 173.14.39 '.run'
  704.    files will be identical. There is no need to reinstall.
  705.  
  706.  
  707. Q. Can I add my own precompiled kernel interfaces to a '.run' file?
  708.  
  709. A. Yes, the --add-this-kernel  '.run' file option will unpack the '.run' file,
  710.    build a precompiled kernel interface for the currently running kernel, and
  711.    repackage the '.run' file, appending '-custom' to the filename. This may be
  712.    useful, for example, if you administer multiple Linux computers, each
  713.    running the same kernel.
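   For example, again assuming the pkg1 package and running as root:

       # sh NVIDIA-Linux-x86-173.14.39-pkg1.run --add-this-kernel
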
  714.  
  715.  
  716. Q. Where can I find the source code for the 'nvidia-installer' utility?
  717.  
  718. A. The 'nvidia-installer' utility is released under the GPL. The latest source
  719.    code for it is available at:
  720.    ftp://download.nvidia.com/XFree86/nvidia-installer
  721.  
  722.  
  723.  
  724. NVIDIA DRIVER
  725.  
  726. Q. Where should I start when diagnosing display problems?
  727.  
  728. A. One of the most useful tools for diagnosing problems is the X log file in
  729.    '/var/log'. Lines that begin with "(II)" are information, "(WW)" are
  730.    warnings, and "(EE)" are errors. You should make sure that the correct
  731.    config file (i.e. the config file you are editing) is being used; look for
  732.    the line that begins with:
  733.    
  734.        (==) Using config file:
  735.    
  736.    Also make sure that the NVIDIA driver is being used, rather than the "nv"
  737.    or "vesa" driver. Search for
  738.    
  739.        (II) LoadModule: "nvidia"
  740.    
  741.    Lines from the driver should begin with:
  742.    
  743.        (II) NVIDIA(0)
  744.    
  745.    
  746.  
  747. Q. How can I increase the amount of data printed in the X log file?
  748.  
  749. A. By default, the NVIDIA X driver prints relatively few messages to stderr
  750.    and the X log file. If you need to troubleshoot, then it may be helpful to
  751.    enable more verbose output by using the X command line options -verbose and
  752.    -logverbose, which can be used to set the verbosity level for the 'stderr'
  753.    and log file messages, respectively. The NVIDIA X driver will output more
  754.    messages when the verbosity level is at or above 5 (X defaults to verbosity
  755.    level 1 for 'stderr' and level 3 for the log file). So, to enable verbose
  756.    messaging from the NVIDIA X driver to both the log file and 'stderr', you
  757.    could start X with the verbosity level set to 5, by doing the following
  758.    
  759.        % startx -- -verbose 5 -logverbose 5
  760.    
  761.    
  762.  
  763. Q. Where can I get 'gl.h' or 'glx.h' so I can compile OpenGL programs?
  764.  
  765. A. Most systems come with these header files preinstalled. However, NVIDIA
  766.    provides its own 'gl.h' and 'glx.h' files, which get installed by default
  767.    as part of driver installation. If you prefer that the NVIDIA-distributed
  768.    OpenGL header files not be installed, you can pass the --no-opengl-headers
  769.    option to the 'NVIDIA-Linux-x86-173.14.39-pkg1.run' file during
  770.    installation.
  771.  
  772.  
  773. Q. Can I receive email notification of new NVIDIA Accelerated Linux Graphics
  774.    Driver releases?
  775.  
  776. A. Yes. Fill out the form at: http://www.nvidia.com/view.asp?FO=driver_update
  777.  
  778.  
  779. Q. What is NVIDIA's policy towards development series Linux kernels?
  780.  
  781. A. NVIDIA does not officially support development series kernels. However, all
  782.   the kernel module source code that interfaces with the Linux kernel is
  783.   available in the 'usr/src/nv/' directory of the '.run' file. NVIDIA
  784.   encourages members of the Linux community to develop patches to these
  785.   source files to support development series kernels. A web search will most
  786.   likely yield several community supported patches.
  787.  
  788.  
  789. Q. Why does X use so much memory?
  790.  
  791. A. When measuring any application's memory usage, you must be careful to
  792.    distinguish between physical system RAM used and virtual mappings of shared
  793.    resources. For example, most shared libraries exist only once in physical
  794.    memory but are mapped into multiple processes. This memory should only be
  795.    counted once when computing total memory usage. In the same way, the video
  796.    memory on a graphics card or register memory on any device can be mapped
  797.    into multiple processes. These mappings do not consume normal system RAM.
  798.  
  799.    This has been a frequently discussed topic on XFree86 mailing lists; see,
  800.    for example:
  801.  
  802.     http://marc.theaimsgroup.com/?l=xfree-xpert&m=96835767116567&w=2
  803.  
  804.    The 'pmap' utility described in the above thread is available here:
  805.    http://web.hexapodia.org/~adi/pmap.c and is a useful tool in distinguishing
  806.    between types of memory mappings. For example, while 'top' may indicate
  807.    that X is using several hundred MB of memory, the last line of output from
  808.    pmap:
  809.    
  810.        mapped:   287020 KB writable/private: 9932 KB shared: 264656 KB
  811.    
  812.    reveals that X is really only using roughly 10MB of system RAM (the
  813.    "writable/private" value).
  814.  
  815.    Note, also, that X must allocate resources on behalf of X clients (the
  816.    window manager, your web browser, etc); X's memory usage will increase as
  817.   more clients request resources such as pixmaps, and decrease as you close X
  818.   applications.
  819.  
  820.  
  821. Q. Where can I find the tarballs?
  822.  
  823. A. Plain tarballs are no longer available. The '.run' file is a tarball with a
  824.   shell script prepended. You can execute the '.run' file with the
  825.   --extract-only option to unpack the tarball.
  826.  
  827.  
  828. Q. How do I tell if I have my kernel sources installed?
  829.  
  830. A. If you are running on a distro that uses RPM (Red Hat, Mandrake, SuSE,
  831.   etc), then you can use 'rpm' to tell you. At a shell prompt, type:
  832.  
  833.       % rpm -qa | grep kernel
  834.  
  835.   and look at the output. You should see a package that corresponds to your
  836.   kernel (often named something like kernel-2.6.15-7) and a kernel source
  837.   package with the same version (often named something like
  838.   kernel-devel-2.6.15-7 or kernel-source-2.4.22-7). If none of the lines seem
  839.   to correspond to a source package, then you will probably need to install
  840.   it. If the listed versions do not match (e.g., kernel-2.6.15-7 vs.
  841.   kernel-devel-2.6.15-10), then you will need to update the kernel-devel
  842.   package to match the installed kernel. If you have multiple kernels
  843.   installed, you need to install the kernel-devel package that corresponds to
  844.   your RUNNING kernel (or make sure your installed source package matches the
  845.   running kernel). You can do this by looking at the output of 'uname -r' and
  846.   matching versions.
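  For example (the version numbers shown are purely illustrative):

       % uname -r
       2.6.15-7
       % rpm -qa | grep kernel-devel
       kernel-devel-2.6.15-7
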
  847.  
  848.  
  849. Q. Where can I find older driver versions?
  850.  
  851. A. Please visit ftp://download.nvidia.com/XFree86_40/
  852.  
  853.  
  854. Q. What is SELinux and how does it interact with the NVIDIA driver?
  855.  
  856. A. Security-Enhanced Linux (SELinux) is a set of modifications applied to the
  857.   Linux kernel and utilities that implement a security policy architecture.
  858.   When in use it requires that the security type on all shared libraries be
  859.   set to 'shlib_t'. The installer detects when to set the security type, and
  860.   sets it on all shared libraries it installs. The option --force-selinux
  861.   passed to the '.run' file overrides the detection of when to set the
  862.   security type.
  863.  
  864.  
  865. Q. Why do applications that use DGA graphics fail?
  866.  
  867. A. The NVIDIA driver does not support the graphics component of the
  868.   XFree86-DGA (Direct Graphics Access) extension. Applications can use the
  869.   XDGASelectInput() function to acquire relative pointer motion, but
  870.   graphics-related functions such as XDGASetMode() and XDGAOpenFramebuffer()
  871.   will fail.
  872.  
  873.   The graphics component of XFree86-DGA is not supported because it requires
  874.   a CPU mapping of framebuffer memory. As graphics cards ship with increasing
  875.   quantities of video memory, the NVIDIA X driver has had to switch to a more
  876.   dynamic memory mapping scheme that is incompatible with DGA. Furthermore,
  877.   DGA does not cooperate with other graphics rendering libraries such as Xlib
  878.   and OpenGL because it accesses GPU resources directly.
  879.  
  880.   NVIDIA recommends that applications use OpenGL or Xlib, rather than DGA,
  881.   for graphics rendering. Using rendering libraries other than DGA will yield
  882.   better performance and improve interoperability with other X applications.
  883.  
  884.  
  885. Q. My kernel log contains messages that are prefixed with "Xid"; what do these
  886.   messages mean?
  887.  
  888. A. "Xid" messages indicate that a general GPU error occurred, most often due
  889.   to the driver misprogramming the GPU or to corruption of the commands sent
  890.   to the GPU. These messages provide diagnostic information that can be used
  891.   by NVIDIA to aid in debugging reported problems.
  892.  
  893.  
  894. Q. On what NVIDIA hardware is the EXT_framebuffer_object OpenGL extension
  895.   supported?
  896.  
  897. A. EXT_framebuffer_object is supported on GeForce FX, Quadro FX, and newer
  898.   GPUs.
  899.  
  900.  
  901. Q. I use the Coolbits overclocking interface to adjust my graphics card's
  902.    clock frequencies, but the defaults are reset whenever X is restarted. How
  903.    do I make my changes persistent?
  904.  
  905. A. Clock frequency settings are not saved/restored automatically by default to
  906.    avoid potential stability and other problems that may be encountered if the
  907.    chosen frequency settings differ from the defaults qualified by the
  908.    manufacturer. You can use the command line below in '~/.xinitrc' to
  909.    automatically apply custom clock frequency settings when the X server is
  910.    started:
  911.    
  912.        # nvidia-settings -a GPUOverclockingState=1 \
  913.            -a GPU2DClockFreqs=<GPU>,<MEM> -a GPU3DClockFreqs=<GPU>,<MEM>
  914.    
  915.    Here '<GPU>' and '<MEM>' are the desired GPU and video memory frequencies
  916.    (in MHz), respectively.
  917.  
  918.  
  919. Q. Why is the refresh rate not reported correctly by utilities that use the
  920.    XRandR X extension (e.g., the GNOME "Screen Resolution Preferences" panel,
  921.    `xrandr -q`, etc)?
  922.  
  923. A. The XRandR X extension is not presently aware of multiple display devices
  924.    on a single X screen; it only sees the MetaMode bounding box, which may
  925.    contain one or more actual modes. This means that if multiple MetaModes
  926.    have the same bounding box, XRandR will not be able to distinguish between
  927.    them.
  928.  
  929.    In order to support DynamicTwinView, the NVIDIA X driver must make each
  930.    MetaMode appear to be unique to XRandR. Presently, the NVIDIA X driver
  931.    accomplishes this by using the refresh rate as a unique identifier.
  932.  
  933.    You can use `nvidia-settings -q RefreshRate` to query the actual refresh
  934.    rate on each display device.
  935.  
  936.    This behavior can be disabled by setting the X configuration option
  937.    "DynamicTwinView" to FALSE.
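   For example, with a line such as the following (typically placed in the
   "Screen" or "Device" section of the X config file):

       Option "DynamicTwinView" "FALSE"
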
  938.  
  939.    For details, see Chapter 13.
  940.  
  941.  
  942. Q. Why does starting certain applications result in Xlib error messages
  943.    indicating extensions like "XFree86-VidModeExtension" or "SHAPE" are
  944.    missing?
  945.  
  946. A. If your X config file has a "Module" section that does not list the
  947.    "extmod" module, some X server extensions may be missing, resulting in
  948.    error messages of the form:
  949.    
  950.    Xlib: extension "SHAPE" missing on display ":0.0"
  951.    Xlib: extension "XFree86-VidModeExtension" missing on display ":0.0"
  952.    Xlib: extension "XFree86-DGA" missing on display ":0.0"
  953.    
  954.    You can solve this problem by adding the line below to your X config file's
  955.   "Module" section:
  956.  
  957.       Load "extmod"
  958.  
  959.  
  960.  
  961. ______________________________________________________________________________
  962.  
  963. Chapter 8. Common Problems
  964. ______________________________________________________________________________
  965.  
  966. This section provides solutions to common problems associated with the NVIDIA
  967. Linux x86 Driver.
  968.  
  969. Q. My X server fails to start, and my X log file contains the error:
  970.  
  971.   (EE) NVIDIA(0): The NVIDIA kernel module does not appear to
  972.   (EE) NVIDIA(0):      be receiving interrupts generated by the NVIDIA graphics
  974.   (EE) NVIDIA(0):      device PCI:x:x:x. Please see the COMMON PROBLEMS
  975.   (EE) NVIDIA(0):      section in the README for additional information.
  976.  
  977.  
  978. A. This can be caused by a variety of problems, such as PCI IRQ routing
  979.   errors, I/O APIC problems or conflicts with other devices sharing the IRQ
  980.   (or their drivers).
  981.  
  982.   If possible, configure your system such that your graphics card does not
  983.   share its IRQ with other devices (try moving the graphics card to another
  984.   slot if applicable, unload/disable the driver(s) for the device(s) sharing
  985.   the card's IRQ, or remove/disable the device(s)).
  986.  
  987.    Depending on the nature of the problem, one of (or a combination of) these
  988.    kernel parameters might also help:
  989.    
  990.        Parameter         Behavior
  991.        --------------    ---------------------------------------------------
  992.        pci=noacpi        don't use ACPI for PCI IRQ routing
  993.        pci=biosirq       use PCI BIOS calls to retrieve the IRQ routing
  994.                          table
  995.        noapic            don't use I/O APICs present in the system
  996.        acpi=off          disable ACPI
  997.    
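   How these parameters are passed depends on your boot loader. With GRUB
   (legacy), for example, they can be appended to the "kernel" line of the boot
   entry; the kernel image path and root device below are only placeholders:

       # excerpt from /boot/grub/menu.lst (paths are placeholders)
       kernel /boot/vmlinuz-2.6.15-7 ro root=/dev/hda1 noapic
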
  998.    
  999.  
  1000. Q. My X server fails to start, and my X log file contains the error:
  1001.    
  1002.    (EE) NVIDIA(0): The interrupt for NVIDIA graphics device PCI:x:x:x
  1003.    (EE) NVIDIA(0):      appears to be edge-triggered. Please see the COMMON
  1004.    (EE) NVIDIA(0):      PROBLEMS section in the README for additional
  1005.    information.
  1006.    
  1007.    
  1008. A. An edge-triggered interrupt means that the kernel has programmed the
  1009.    interrupt as edge-triggered rather than level-triggered in the Advanced
  1010.    Programmable Interrupt Controller (APIC). Edge-triggered interrupts are not
  1011.    intended to be used for sharing an interrupt line between multiple devices;
  1012.    level-triggered interrupts are the intended trigger for such usage. When
  1013.    using edge-triggered interrupts, it is common for device drivers using that
  1014.    interrupt line to stop receiving interrupts. This would appear to the end
  1015.    user as those devices no longer working, and potentially as a full system
  1016.    hang. These problems tend to be more common when multiple devices are
  1017.    sharing that interrupt line.
  1018.  
  1019.    This occurs when ACPI is not used to program interrupt routing in the APIC.
  1020.    This often occurs on 2.4 Linux kernels, which do not fully support ACPI, or
  1021.    2.6 kernels when ACPI is disabled or fails to initialize. In these cases,
  1022.    the Linux kernel falls back to tables provided by the system BIOS. In some
  1023.    cases the system BIOS assumes ACPI will be used for routing interrupts and
  1024.    configures these tables to incorrectly label all interrupts as
  1025.    edge-triggered. The current interrupt configuration can be found in
  1026.    /proc/interrupts.
  1027.  
  1028.    Available workarounds include: updating to a newer system BIOS, trying a
  1029.    2.6 kernel with ACPI enabled, or passing the 'noapic' option to the kernel
  1030.    to force interrupt routing through the traditional Programmable Interrupt
  1031.    Controller (PIC). Newer kernels also provide an interrupt polling mechanism
  1032.    to attempt to work around this problem. This mechanism can be enabled by
  1033.    passing the 'irqpoll' option to the kernel.
  1034.  
  1035.    Currently, the NVIDIA driver will attempt to detect edge triggered
  1036.    interrupts and X will purposely fail to start (to avoid stability issues).
  1037.    This behavior can be overridden by setting the "NVreg_RMEdgeIntrCheck"
  1038.    NVIDIA Linux kernel module parameter. This parameter defaults to "1", which
  1039.    enables the edge triggered interrupt detection. Set this parameter to "0"
  1040.    to disable this detection.
  1041.  
  1042.  
  1043. Q. X starts for me, but OpenGL applications terminate immediately.
  1044.  
  1045. A. If X starts but you have trouble with OpenGL, you most likely have a
  1046.    problem with other libraries in the way, or there are stale symlinks. See
  1047.    Chapter 5 for details. Sometimes, all it takes is to rerun 'ldconfig'.
  1048.  
  1049.    You should also check that the correct extensions are present;
  1050.    
  1051.        % xdpyinfo
  1052.    
  1053.    should show the "GLX" and "NV-GLX" extensions present. If these two
  1054.    extensions are not present, then there is most likely a problem loading the
  1055.    glx module, or it is unable to implicitly load GLcore. Check your X config
  1056.    file and make sure that you are loading glx (see Chapter 6). If your X
  1057.    config file is correct, then check the X log file for warnings/errors
  1058.    pertaining to GLX. Also check that all of the necessary symlinks are in
  1059.    place (refer to Chapter 5).
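   To quickly check for these two extensions, you can filter the output:

       % xdpyinfo | grep -i glx
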
  1060.  
  1061.  
  1062. Q. When Xinerama is enabled, my stereo glasses are shuttering only when the
  1063.    stereo application is displayed on one specific X screen. When the
  1064.    application is displayed on the other X screens, the stereo glasses stop
  1065.    shuttering.
  1066.  
  1067. A. This problem occurs with DDC and "blue line" stereo glasses, which get the
  1068.    stereo signal from one video port of the graphics card. When an X screen
  1069.    does not display any stereo drawable, the stereo signal is disabled on the
  1070.    associated video port.
  1071.  
  1072.    Forcing stereo flipping allows the stereo glasses to shutter continuously.
  1073.    This can be done by enabling the OpenGL control "Force Stereo Flipping" in
  1074.    nvidia-settings, or by setting the X configuration option
  1075.    "ForceStereoFlipping" to "1".
  1076.  
  1077.  
  1078. Q. Stereo is not in sync across multiple displays.
  1079.  
  1080. A. There are two cases where this may occur. If the displays are attached to
  1081.    the same GPU, and one of them is out of sync with the stereo glasses, you
  1082.    will need to reconfigure your monitors to drive identical mode timings; see
  1083.    Chapter 19 for details.
  1084.  
  1085.    If the displays are attached to different GPUs, the only way to synchronize
  1086.    stereo across the displays is with a G-Sync device, which is only supported
  1087.    by certain Quadro cards. See Chapter 26 for details. This applies to
  1088.    separate GPUs on separate cards as well as separate GPUs on the same card,
  1089.    such as Quadro FX 4500 X2. Note that the Quadro FX 4500 X2 only provides a
  1090.    single DIN connector for stereo, tied to the bottommost GPU. In order to
  1091.    synchronize onboard stereo on the other GPU you must use a G-Sync device.
  1092.  
  1093.  
  1094. Q. I just upgraded my kernel, and now the NVIDIA kernel module will not load.
  1095.  
  1096. A. The kernel interface layer of the NVIDIA kernel module must be compiled
  1097.    specifically for the configuration and version of your kernel. If you
  1098.    upgrade your kernel, then the simplest solution is to reinstall the driver.
  1099.  
  1100.    ADVANCED: You can install the NVIDIA kernel module for a non-running kernel
  1101.    (for example: in the situation where you just built and installed a new
  1102.    kernel, but have not rebooted yet) with a command line such as this:
  1103.    
  1104.        # sh NVIDIA-Linux-x86-173.14.39-pkg1.run --kernel-name='KERNEL_NAME'
  1105.    
  1106.    
  1107.    Where 'KERNEL_NAME' is what 'uname -r' would report if the target kernel
  1108.    were running.
  1109.  
  1110.  
  1111. Q. My X server fails to start, and my X log file contains the error:
  1112.    
  1113.    (EE) NVIDIA(0): Failed to load the NVIDIA kernel module!
  1114.    
  1115.    
  1116. A. The X driver will abort with this error message if the NVIDIA kernel module
  1117.    fails to load. If you receive this error, you should check the output of
  1118.    `dmesg` for kernel error messages and/or attempt to load the kernel module
  1119.    explicitly with `modprobe nvidia`. If unresolved symbols are reported, then
  1120.    the kernel module was most likely built against a Linux kernel source tree
  1121.    (or kernel headers) for a kernel revision or configuration that doesn't
  1122.   match the running kernel.
  1123.  
  1124.   You can specify the location of the kernel source tree (or headers) when
  1125.   you install the NVIDIA driver using the --kernel-source-path command line
  1126.   option (see `sh NVIDIA-Linux-x86-173.14.39-pkg1.run --advanced-options` for
  1127.   details).
  1128.  
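          For example, if the kernel source tree is installed in a location the
          installer does not find on its own (the path below is illustrative only),
          the option can be passed as follows:

              # sh NVIDIA-Linux-x86-173.14.39-pkg1.run \
                    --kernel-source-path=/usr/src/kernels/`uname -r`
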
  1129.   Old versions of the module-init-tools include `modprobe` binaries that
  1130.   report an error when instructed to load a module that's already loaded into
  1131.    the kernel. Please upgrade your module-init-tools if you receive an error
  1132.    message to this effect.
  1133.  
  1134.    The X server reads '/proc/sys/kernel/modprobe' to determine the path to the
  1135.    `modprobe` utility and falls back to '/sbin/modprobe' if the file doesn't
  1136.   exist. Please make sure that this path is valid and refers to a `modprobe`
  1137.   binary compatible with the Linux kernel running on your system.
  1138.  
  1139.   The "LoadKernelModule" X driver option can be used to change the default
  1140.   behavior and disable kernel module auto-loading.
  1141.  
  1142.  
  1143. Q. Installing the NVIDIA kernel module gives an error message like:
  1144.  
  1145.   #error Modules should never use kernel-headers system headers
  1146.   #error but headers from an appropriate kernel-source
  1147.  
  1148.  
  1149. A. You need to install the source for the Linux kernel. In most situations you
  1150.   can fix this problem by installing the kernel-source or kernel-devel
  1151.   package for your distribution.
  1152.  
  1153.  
  1154. Q. OpenGL applications crash and print out the following warning:
  1155.  
  1156.   WARNING: Your system is running with a buggy dynamic loader.
  1157.   This may cause crashes in certain applications.  If you
  1158.   experience crashes you can try setting the environment
  1159.   variable __GL_SINGLE_THREADED to 1.  For more information,
  1160.   consult the FREQUENTLY ASKED QUESTIONS section in
  1161.   the file /usr/share/doc/NVIDIA_GLX-1.0/README.txt.
  1162.  
  1163.  
  1164. A. The dynamic loader on your system has a bug which will cause applications
  1165.   linked with pthreads, and that dlopen() libGL multiple times, to crash.
  1166.   This bug is present in older versions of the dynamic loader. Distributions
  1167.   that shipped with this loader include but are not limited to Red Hat Linux
  1168.   6.2 and Mandrake Linux 7.1. Version 2.2 and later of the dynamic loader are
  1169.   known to work properly. If the crashing application is single threaded then
  1170.   setting the environment variable '__GL_SINGLE_THREADED' to "1" will prevent
  1171.   the crash. In the bash shell you would enter:
  1172.  
  1173.       % export __GL_SINGLE_THREADED=1
  1174.  
  1175.   and in csh and derivatives use:
  1176.  
  1177.       % setenv __GL_SINGLE_THREADED 1
  1178.  
  1179.   Previous releases of the NVIDIA Accelerated Linux Graphics Driver attempted
  1180.   to work around this problem. Unfortunately, the workaround caused problems
  1181.   with other applications and was removed after version 1.0-1541.
  1182.  
  1183.  
  1184. Q. Quake3 crashes when changing video modes.
  1185.  
  1186. A. You are probably experiencing a problem described above. Please check the
  1187.   text output for the "WARNING" message described in the previous hint.
  1188.   Setting '__GL_SINGLE_THREADED' to "1" will fix the problem.
  1189.  
  1190.  
  1191. Q. I cannot build the NVIDIA kernel module, or, I can build the NVIDIA kernel
  1192.   module, but modprobe/insmod fails to load the module into my kernel.
  1193.  
  1194. A. These problems are generally caused by the build using the wrong kernel
  1195.   header files (i.e. header files for a different kernel version than the one
  1196.   you are running). The convention used to be that kernel header files should
  1197.   be stored in '/usr/include/linux/', but that is deprecated in favor of
  1198.   '/lib/modules/RELEASE/build/include' (where RELEASE is the result of 'uname
  1199.    -r'). The 'nvidia-installer' should be able to determine the location on
  1200.   your system; however, if you encounter a problem you can force the build to
  1201.   use certain header files by using the --kernel-include-dir option. For this
  1202.   to work you will of course need the appropriate kernel header files
  1203.   installed on your system. Consult the documentation that came with your
  1204.   distribution; some distributions do not install the kernel header files by
  1205.   default, or they install headers that do not coincide properly with the
  1206.   kernel you are running.
  1207.  
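          As an illustration (assuming the headers for the running kernel are
          installed in the newer location mentioned above), the build can be pointed
          at a specific include directory like this:

              # sh NVIDIA-Linux-x86-173.14.39-pkg1.run \
                    --kernel-include-dir=/lib/modules/`uname -r`/build/include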
  1208.  
  1209. Q. There are problems running Heretic II.
  1210.  
  1211. A. Heretic II installs, by default, a symlink called 'libGL.so' in the
  1212.   application directory. You can remove or rename this symlink, since the
  1213.   system will then find the default 'libGL.so' (which our drivers install in
  1214.   '/usr/lib'). From within Heretic II you can then set your render mode to
  1215.   OpenGL in the video menu. There is also a patch available to Heretic II
  1216.   from lokigames at: http://www.lokigames.com/products/heretic2/updates.php3/
  1217.  
  1218.  
  1219. Q. My system hangs when switching to a virtual terminal if I have rivafb
  1220.   enabled.
  1221.  
  1222. A. Using both rivafb and the NVIDIA kernel module at the same time is
  1223.   currently broken. In general, using two independent software drivers to
  1224.   drive the same piece of hardware is a bad idea.
  1225.  
  1226.  
  1227. Q. Compiling the NVIDIA kernel module gives this error:
  1228.  
  1229.   You appear to be compiling the NVIDIA kernel module with
  1230.   a compiler different from the one that was used to compile
  1231.   the running kernel. This may be perfectly fine, but there
  1232.   are cases where this can lead to unexpected behavior and
  1233.   system crashes.
  1234.  
  1235.   If you know what you are doing and want to override this
  1236.   check, you can do so by setting IGNORE_CC_MISMATCH.
  1237.  
  1238.   In any other case, set the CC environment variable to the
  1239.   name of the compiler that was used to compile the kernel.
  1240.  
  1241.  
  1242. A. You should compile the NVIDIA kernel module with the same compiler version
  1243.   that was used to compile your kernel. Some Linux kernel data structures are
  1244.   dependent on the version of gcc used to compile the kernel; for example, in
  1245.   'include/linux/spinlock.h':
  1246.  
  1247.           ...
  1248.           * Most gcc versions have a nasty bug with empty initializers.
  1249.           */
  1250.           #if (__GNUC__ > 2)
  1251.             typedef struct { } rwlock_t;
  1252.             #define RW_LOCK_UNLOCKED (rwlock_t) { }
  1253.           #else
  1254.             typedef struct { int gcc_is_buggy; } rwlock_t;
  1255.             #define RW_LOCK_UNLOCKED (rwlock_t) { 0 }
  1256.           #endif
  1257.  
  1258.   If the kernel is compiled with gcc 2.x, but gcc 3.x is used when the kernel
  1259.   interface is compiled (or vice versa), the size of rwlock_t will vary, and
  1260.   things like ioremap will fail. To check what version of gcc was used to
  1261.   compile your kernel, you can examine the output of:
  1262.  
  1263.       % cat /proc/version
  1264.  
  1265.   To check what version of gcc is currently in your '$PATH', you can examine
  1266.   the output of:
  1267.  
  1268.       % gcc -v
  1269.  
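          If the two versions differ, one possible approach (sketched here; the
          compiler path is hypothetical and must match your system) is to point the
          build at the compiler that was used for the kernel, or, at your own risk,
          to override the check:

              # env CC=/usr/bin/gcc-3.4 sh NVIDIA-Linux-x86-173.14.39-pkg1.run

              # env IGNORE_CC_MISMATCH=1 sh NVIDIA-Linux-x86-173.14.39-pkg1.run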
  1270.  
  1271.  
  1272. Q. X fails with error
  1273.  
  1274.   Failed to allocate LUT context DMA
  1275.  
  1276.  
  1277. A. This is one of the possible consequences of compiling the NVIDIA kernel
  1278.   interface with a different gcc version than used to compile the Linux
  1279.   kernel (see above).
  1280.  
  1281.  
  1282. Q. I recently updated various libraries on my system using my Linux
  1283.   distributor's update utility, and the NVIDIA graphics driver no longer
  1284.    works.
  1285.  
  1286. A. Conflicting libraries may have been installed by your distribution's update
  1287.   utility; see Chapter 5 for details on how to diagnose this.
  1288.  
  1289.  
  1290. Q. I have rebuilt the NVIDIA kernel module, but when I try to insert it, I get
  1291.   a message telling me I have unresolved symbols.
  1292.  
  1293. A. Unresolved symbols are most often caused by a mismatch between your kernel
  1294.   sources and your running kernel. They must match for the NVIDIA kernel
  1295.   module to build correctly. Make sure your kernel sources are installed and
  1296.   configured to match your running kernel.
  1297.  
  1298.  
  1299. Q. OpenGL applications leak significant amounts of memory on my system!
  1300.  
  1301. A. If your kernel is making use of the -rmap VM, the system may be leaking
  1302.   memory due to a memory management optimization introduced in -rmap14a. The
  1303.   -rmap VM has been adopted by several popular distributions, and the memory
  1304.   leak is known to be present in some of the distribution kernels; it has
  1305.   been fixed in -rmap15e.
  1306.  
  1307.   If you suspect that your system is affected, try upgrading your kernel or
  1308.   contact your distribution's vendor for assistance.
  1309.  
  1310.  
  1311. Q. Some OpenGL applications (like Quake3 Arena) crash when I start them on Red
  1312.    Hat Linux 9.0.
  1313.  
  1314. A. Some versions of the glibc package shipped by Red Hat that support TLS do
  1315.    not properly handle using dlopen() to access shared libraries which use
   1316.    some TLS models. This problem is exhibited, for example, when Quake3
   1317.    Arena dlopen()s NVIDIA's libGL library. Please obtain at least
   1318.    glibc-2.3.2-11.9, which is available as an update from Red Hat.
  1319.  
  1320.  
  1321. Q. I have installed the driver, but my Enable 3D Acceleration checkbox is
  1322.    still grayed out.
  1323.  
  1324. A. Most distribution-provided configuration applets are not aware of the
  1325.    NVIDIA accelerated driver, and consequently will not update themselves when
  1326.    you install the driver. Your driver, if it has been installed properly,
  1327.    should function fine.
  1328.  
  1329.  
  1330. Q. When changing settings in games like Quake 3 Arena, or Wolfenstein Enemy
  1331.    Territory, the game crashes and I see this error:
  1332.    
  1333.    ...loading libGL.so.1: QGL_Init: dlopen libGL.so.1 failed:
  1334.    /usr/lib/tls/libGL.so.1: shared object cannot be dlopen()ed:
  1335.    static TLS memory too small
  1336.    
  1337.    
  1338. A. These games close and reopen the NVIDIA OpenGL driver (via dlopen() /
  1339.    dlclose()) when settings are changed. On some versions of glibc (such as
  1340.    the one shipped with Red Hat Linux 9), there is a bug that leaks static TLS
  1341.    entries. This glibc bug causes subsequent re-loadings of the OpenGL driver
  1342.    to fail. This is fixed in more recent versions of glibc; see Red Hat bug
  1343.    #89692: https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=89692
  1344.  
  1345.  
  1346. Q. X crashes during 'startx', and my X log file contains this error message:
  1347.    
  1348.    (EE) NVIDIA(0): Failed to obtain a shared memory identifier.
  1349.    
  1350.    
  1351. A. The NVIDIA OpenGL driver and the NVIDIA X driver require shared memory to
  1352.    communicate; you must have 'CONFIG_SYSVIPC' enabled in your kernel.
  1353.  
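           One quick way to verify this (assuming your distribution installs the
           kernel configuration file under /boot, which not all do) is:

               % grep CONFIG_SYSVIPC /boot/config-`uname -r`

           A correctly configured kernel should report 'CONFIG_SYSVIPC=y'.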
  1354.  
  1355. Q. When I try to install the driver, the installer claims that X is running,
  1356.    even though I have exited X.
  1357.  
  1358. A. The installer detects the presence of an X server by checking for X's lock
  1359.   files: '/tmp/.Xn-lock', where 'n' is the number of the X Display (the
  1360.   installer checks for X Displays 0-7). If you have exited X, but one of
  1361.   these files has been left behind, then you will need to manually delete the
  1362.   lock file. DO NOT remove this file if X is still running!
  1363.  
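          For example, if X is no longer running on display 0 but its lock file was
          left behind, the stale file can be removed with:

              # rm /tmp/.X0-lock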
  1364.  
  1365. Q. My system runs, but seems unstable.
  1366.  
  1367. A. Your stability problems may be AGP-related. See Chapter 12 for details.
  1368.  
  1369.  
  1370. Q. OpenGL applications are running slowly.
  1371.  
  1372. A. The application is probably using a different library that still remains on
  1373.   your system, rather than the NVIDIA supplied OpenGL library. See Chapter 5
  1374.   for details.
  1375.  
  1376.  
  1377. Q. There are problems running Quake2.
  1378.  
  1379. A. Quake2 requires some minor setup to get it going. First, in the Quake2
  1380.   directory, the install creates a symlink called 'libGL.so' that points at
  1381.   'libMesaGL.so'. This symlink should be removed or renamed. Second, in order
  1382.   to run Quake2 in OpenGL mode, you must type
  1383.  
  1384.       % quake2 +set vid_ref glx +set gl_driver libGL.so
  1385.  
  1386.   Quake2 does not seem to support any kind of full-screen mode, but you can
  1387.   run your X server at the same resolution as Quake2 to emulate full-screen
  1388.   mode.
  1389.  
  1390.  
  1391. Q. I am using either nForce or nForce2 internal graphics, and I see warnings
  1392.   like this in my X log file:
  1393.  
  1394.   Not using mode "1600x1200" (exceeds valid memory bandwidth usage)
  1395.  
  1396.  
  1397. A. Integrated graphics have stricter memory bandwidth limitations, which
  1398.   limit the resolution and refresh rate of the modes you request. To work
  1399.   around this, you can reduce the maximum refresh rate by lowering the upper
  1400.   value of the VertRefresh range in the 'Monitor' section of your X config
  1401.   file. Though not recommended, you can disable the memory bandwidth test
  1402.   with the NoBandWidthTest X config file option.
  1403.  
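          For example (the Identifier and frequency range below are illustrative
          only; use values appropriate for your monitor), the upper VertRefresh
          limit is set in the "Monitor" section of the X configuration file:

              Section "Monitor"
                  Identifier  "Monitor0"
                  VertRefresh 50.0 - 75.0
              EndSection

          and the bandwidth test, if you choose to disable it, is controlled from
          the "Device" or "Screen" section:

              Option "NoBandWidthTest" "true"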
  1404.  
  1405. Q. X takes a long time to start (possibly several minutes).
  1406.  
  1407. A. Most of the X startup delay problems we have found are caused by incorrect
  1408.   data in video BIOSes about what display devices are possibly connected or
  1409.   what i2c port should be used for detection. You can work around these
  1410.   problems with the X config option IgnoreDisplayDevices (see the description
  1411.   in Appendix B).
  1412.  
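          For example, to tell the X driver not to probe for a television encoder
          (the value shown is only an illustration; see Appendix B for the accepted
          device names and full syntax):

              Option "IgnoreDisplayDevices" "TV"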
  1413.  
  1414. Q. Fonts are incorrectly sized after installing the NVIDIA driver.
  1415.  
  1416. A. Incorrectly sized fonts are generally caused by incorrect DPI (Dots Per
  1417.   Inch) information. You can check what X thinks the physical size of your
  1418.   monitor is, by running:
  1419.  
  1420.    % xdpyinfo | grep dimensions
  1421.  
  1422.   This will report the size in pixels, and in millimeters.
  1423.  
  1424.   If these numbers are wrong, you can correct them by modifying the X
  1425.   server's DPI setting. See Appendix E for details.
  1426.  
  1427.  
  1428. Q. General problems with ALi chipsets
  1429.  
  1430. A. There are some known timing and signal integrity issues on ALi chipsets.
   1431.    The following tips may help stabilize problematic ALi systems:
  1432.    
  1433.       o Disable TURBO AGP MODE in the BIOS.
  1434.    
   1435.       o When using a P5A, upgrade to BIOS Revision 1002 BETA 2.
  1436.    
   1437.       o When using 1007, 1007A, or 1009, adjust the IO Recovery Time to 4
  1438.         cycles.
  1439.    
  1440.       o AGP is disabled by default on some ALi chipsets (ALi1541, ALi1647) to
  1441.         work around severe system stability problems with these chipsets. See
  1442.         the comments for EnableALiAGP in 'nv-reg.h' to force AGP on anyway.
  1443.    
  1444.    
  1445.  
  1446. Q. Using GNOME configuration utilities, I am unable to get a resolution above
  1447.    800x600.
  1448.  
  1449. A. The installation of GNOME provided in operating systems such as Red Hat
   1450.    Enterprise Linux 4 and Solaris 10 Update 2 contains several competing
  1451.    interfaces for specifying resolution:
  1452.    
  1453.    
  1454.        'System Settings' -> 'Display'
  1455.    
  1456.    
  1457.    which will update the X configuration file, and
  1458.    
  1459.    
  1460.        'Applications' -> 'Preferences' -> 'Screen Resolution'
  1461.    
  1462.    
  1463.    which will update the per-user screen resolution using the XRandR
  1464.    extension. Your desktop resolution will be limited to the smaller of the
  1465.    two settings. Be sure to check the setting of each.
  1466.  
  1467.  
  1468. Q. X does not restore the VGA console when run on a TV. I get this error
  1469.    message in my X log file:
  1470.    
  1471.    Unable to initialize the X int10 module; the console may not be
  1472.    restored correctly on your TV.
  1473.    
  1474.    
  1475. A. The NVIDIA X driver uses the X Int10 module to save and restore console
  1476.    state on TV out, and will not be able to restore the console correctly if
  1477.    it cannot use the Int10 module. If you have built the X server yourself,
  1478.    please be sure you have built the Int10 module. If you are using a build of
  1479.    the X server provided by your operating system and are missing the Int10
  1480.    module, contact your operating system distributor.
  1481.  
  1482.  
  1483. Q. OpenGL applications don't work, and my X log file contains the error:
  1484.  
  1485.   (EE) NVIDIA(0): Unable to map device node /dev/zero with read, write, and
  1486.   (EE) NVIDIA(0):     execute privileges.  The GLX extension will be disabled
  1487.   (EE) NVIDIA(0):     on this X screen.  Please see the COMMON PROBLEMS
  1488.   (EE) NVIDIA(0):     section in the README for more information.
  1489.  
  1490.  
  1491. A. The NVIDIA OpenGL driver must be able to map the '/dev/zero' device node
  1492.   with read, write, and execute privileges in order to function correctly.
  1493.   The driver needs this ability to allocate executable memory, which is used
  1494.   for optimizations that require generating code at run-time. Currently, GLX
  1495.   cannot run without these optimizations.
  1496.  
  1497.   Check that your '/dev' filesystem is set up correctly. In particular,
  1498.   mounting the '/dev' file system with the 'noexec' option will cause this to
  1499.   happen. If you haven't changed the configuration of your '/dev' filesystem,
  1500.    please contact your operating system distributor.
  1501.  
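           One way to check for the 'noexec' mount option (a sketch only; the exact
           mount arrangement varies between systems) is:

               % mount | grep /dev

           If 'noexec' appears in the output for the filesystem containing '/dev',
           remount that filesystem without the option or adjust the corresponding
           '/etc/fstab' entry.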
  1502.  
  1503. ______________________________________________________________________________
  1504.  
  1505. Chapter 9. Known Issues
  1506. ______________________________________________________________________________
  1507.  
  1508. The following problems still exist in this release and are in the process of
  1509. being resolved.
  1510.  
  1511. Known Issues
  1512.  
  1513. OpenGL and dlopen()
  1514.  
  1515.     There are some issues with older versions of the glibc dynamic loader
  1516.     (e.g., the version that shipped with Red Hat Linux 7.2) and applications
  1517.     such as Quake3 and Radiant, that use dlopen(). See Chapter 7 for more
  1518.     details.
  1519.  
  1520. Multicard, Multimonitor
  1521.  
  1522.     In some cases, the secondary card is not initialized correctly by the
  1523.     NVIDIA kernel module. You can work around this by enabling the XFree86
  1524.     Int10 module to soft-boot all secondary cards. See Appendix B for details.
  1525.  
  1526. Interaction with pthreads
  1527.  
  1528.     Single-threaded applications that use dlopen() to load NVIDIA's libGL
  1529.    library, and then use dlopen() to load any other library that is linked
  1530.    against libpthread will crash in libGL. This does not happen in NVIDIA's
  1531.     new ELF TLS OpenGL libraries (see Chapter 5 for a description of the ELF
  1532.     TLS OpenGL libraries). Possible workarounds for this problem are:
  1533.    
  1534.       1. Load the library that is linked with libpthread before loading libGL.
  1535.    
  1536.       2. Link the application with libpthread.
  1537.    
  1538.    
  1539. The X86-64 platform (AMD64/EM64T) and 2.6 kernels
  1540.  
  1541.     Many 2.4 and 2.6 x86_64 kernels have an accounting problem in their
  1542.     implementation of the change_page_attr kernel interface. Early 2.6 kernels
  1543.     include a check that triggers a BUG() when this situation is encountered
  1544.     (triggering a BUG() results in the current application being killed by the
  1545.     kernel; this application would be your OpenGL application or potentially
  1546.     the X server). The accounting issue has been resolved in the 2.6.11
  1547.     kernel.
  1548.  
  1549.     We have added checks to recognize that the NVIDIA kernel module is being
  1550.     compiled for the x86-64 platform on a kernel between 2.6.0 and 2.6.11. In
  1551.     this case, we will disable usage of the change_page_attr kernel interface.
  1552.     This will avoid the accounting issue but leaves the system in danger of
  1553.     cache aliasing (see entry below on Cache Aliasing for more information
  1554.     about cache aliasing). Note that this change_page_attr accounting issue
  1555.     and BUG() can be triggered by other kernel subsystems that rely on this
  1556.     interface.
  1557.  
  1558.     If you are using a 2.6 x86_64 kernel, it is recommended that you upgrade
  1559.     to a 2.6.11 or later kernel.
  1560.  
  1561.     Also take note of common dma issues on 64-bit platforms in Chapter 10.
  1562.  
  1563. Cache Aliasing
  1564.  
  1565.     Cache aliasing occurs when multiple mappings to a physical page of memory
  1566.     have conflicting caching states, such as cached and uncached. Due to these
  1567.     conflicting states, data in that physical page may become corrupted when
  1568.     the processor's cache is flushed. If that page is being used for DMA by a
  1569.    driver such as NVIDIA's graphics driver, this can lead to hardware
  1570.     stability problems and system lockups.
  1571.  
  1572.     NVIDIA has encountered bugs with some Linux kernel versions that lead to
  1573.     cache aliasing. Although some systems will run perfectly fine when cache
  1574.     aliasing occurs, other systems will experience severe stability problems,
  1575.     including random lockups. Users experiencing stability problems due to
  1576.     cache aliasing will benefit from updating to a kernel that does not cause
  1577.     cache aliasing to occur.
  1578.  
  1579.     NVIDIA has added driver logic to detect cache aliasing and to print a
  1580.     warning with a message similar to the following:
  1581.    
  1582.     NVRM: bad caching on address 0x1cdf000: actual 0x46 != expected 0x73
  1583.    
  1584.     If you see this message in your log files and are experiencing stability
  1585.     problems, you should update your kernel to the latest version.
  1586.  
  1587.     If the message persists after updating your kernel, send a bug report to
  1588.     NVIDIA.
  1589.  
  1590. 64-Bit BARs (Base Address Registers)
  1591.  
  1592.     Starting with native PCI Express GPUs, NVIDIA's GPUs will advertise a
  1593.    64-bit BAR capability (a Base Address Register stores the location of a
  1594.    PCI I/O region, such as registers or a frame buffer). This means that the
  1595.    GPU's PCI I/O regions (registers and frame buffer) can be placed above the
  1596.     32-bit address space (the first 4 gigabytes of memory).
  1597.  
  1598.     The decision of where the BAR is placed is made by the system BIOS at boot
  1599.     time. If the BIOS supports 64-bit BARs, then the NVIDIA PCI I/O regions
  1600.     may be placed above the 32-bit address space. If the BIOS does not support
  1601.     this feature, then our PCI I/O regions will be placed within the 32-bit
  1602.     address space as they have always been.
  1603.  
  1604.     Unfortunately, current Linux kernels (as of 2.6.11.x) do not understand or
  1605.     support 64-bit BARs. If the BIOS does place any NVIDIA PCI I/O regions
  1606.     above the 32-bit address space, the kernel will reject the BAR and the
  1607.     NVIDIA driver will not work.
  1608.  
  1609.     There is no known workaround at this point.
  1610.  
  1611. Kernel virtual address space exhaustion on the X86 platform
  1612.  
  1613.     On X86 systems and AMD64/EM64T systems using X86 kernels, only 4GB of
  1614.     virtual address space are available, which the Linux kernel typically
  1615.     partitions such that user processes are allocated 3GB, the kernel itself
  1616.     1GB. Part of the kernel's share is used to create a direct mapping of
  1617.    system memory (RAM). Depending on how much system memory is installed, the
  1618.    kernel virtual address space remaining for other uses varies in size and
   1619.    may be as small as 128MB if 1GB of system memory (or more) is installed.
  1620.    By default, the kernel reserves a minimum of 128MB.
  1621.  
  1622.    The kernel virtual address space still available after the creation of the
  1623.    direct system memory mapping is used by both the kernel and by drivers to
  1624.    map I/O resources, and for some memory allocations. Depending on the
  1625.    number of consumers and their respective requirements, the Linux kernel's
  1626.     virtual address space may be exhausted. Newer Linux kernels print an error
  1627.     message of the form below when this happens:
  1628.    
  1629.     allocation failed: out of vmalloc space - use vmalloc=<size> to increase
  1630.     size.
  1631.    
  1632.    
  1633.     The NVIDIA kernel module requires portions of the kernel's virtual address
  1634.    space for each GPU and for certain memory allocations. If no more than
  1635.    128MB are available to the kernel and device drivers at boot time, the
  1636.    NVIDIA kernel module may be unable to initialize all GPUs, or fail memory
   1637.    allocations. This is not usually a problem with only 1 or 2 GPUs; however,
   1638.    depending on the number of other drivers and their usage patterns, it can
   1639.    be, and it is likely to be a problem with 3 or more GPUs.
  1640.  
  1641.    Possible solutions for this problem include:
  1642.    
  1643.       o If available, the 'vmalloc' kernel parameter can be used to increase
  1644.         the size of the kernel virtual address space reserved by the Linux
  1645.         kernel (the default is 128MB). Incrementally raising this to find the
  1646.         best balance between the size of the kernel virtual address space
  1647.         made available and the size of the direct system memory mapping is
  1648.         recommended. You can achieve this by passing 'vmalloc=192M',
   1649.         'vmalloc=256M', ..., to the kernel and checking if the above error
  1650.         message continues to be printed.
  1651.    
  1652.         Note that some versions of the GRUB boot loader have problems
  1653.         calculating the memory layout and loading the initrd if the 'vmalloc'
  1654.         kernel parameter is used. The 'uppermem' GRUB command can be used to
  1655.         force GRUB to load the initrd into a lower region of system memory to
  1656.         work around this problem. This will not adversely affect system
  1657.         performance once the kernel has been loaded. The suggested syntax is:
  1658.        
  1659.         title     Kernel Title
  1660.         uppermem  524288
  1661.         kernel    (hdX,Y)/boot/vmlinuz...
  1662.        
  1663.        
  1664.         Also note that the 'vmalloc' kernel parameter only exists on Linux
  1665.         2.6.9 and later kernels. On older kernels, the amount of system
  1666.         memory used by the kernel can be reduced with the 'mem' kernel
  1667.         parameter, which also reduces the size of the direct mapping and thus
  1668.         increases the size of the kernel virtual address space available. For
  1669.         example, 'mem=512M' instructs the kernel to ignore all but the first
  1670.         512MB of system memory. Although it is undesirable to reduce the
  1671.         amount of usable system memory, this approach can be used to check if
  1672.         initialization problems are caused by kernel virtual address space
  1673.         exhaustion.
  1674.    
  1675.       o In some cases, disabling frame buffer drivers such as vesafb can
  1676.         help, as such drivers may attempt to map all or a large part of the
  1677.         installed graphics cards' video memory into the kernel's virtual
  1678.         address space, which rapidly consumes this resource. You can disable
  1679.         the vesafb frame buffer driver by passing these parameters to the
  1680.         kernel: 'video=vesa:off vga=normal'.
  1681.    
  1682.       o Some Linux kernels can be configured with alternate address space
  1683.         layouts (e.g. 2.8GB:1.2GB, 2GB:2GB, etc.). This option can be used to
  1684.         avoid exhaustion of the kernel virtual address space without reducing
  1685.         the size of the direct system memory mapping. Some Linux distributors
   1686.         also provide kernels that use separate 4GB address spaces for user
  1687.         processes and the kernel. Such Linux kernels provide sufficient
  1688.         kernel virtual address space on typical systems.
  1689.    
  1690.       o If your system is equipped with an X86-64 (AMD64/EM64T) processor, it
  1691.         is recommended that you switch to a 64-bit Linux kernel/distribution.
  1692.         Due to the significantly larger address space provided by the X86-64
  1693.         processors' addressing capabilities, X86-64 kernels will not run out
  1694.          of kernel virtual address space in the foreseeable future.
  1695.    
  1696.    
  1697. Valgrind
  1698.  
  1699.     The NVIDIA OpenGL implementation makes use of self modifying code. To
  1700.     force Valgrind to retranslate this code after a modification you must run
  1701.     using the Valgrind command line option:
  1702.    
  1703.     --smc-check=all
  1704.    
  1705.     Without this option Valgrind may execute incorrect code causing incorrect
  1706.     behavior and reports of the form:
  1707.    
  1708.     ==30313== Invalid write of size 4
  1709.    
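             For example, a typical invocation might look like this (the application
             name is only a placeholder):

             valgrind --smc-check=all ./my_opengl_application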
  1710.    
  1711. MMConfig-based PCI Configuration Space Accesses
  1712.  
  1713.     2.6 kernels have added support for Memory-Mapped PCI Configuration Space
  1714.     accesses. Unfortunately, there are many problems with this mechanism, and
  1715.     the latest kernel updates are more careful about enabling this support.
  1716.  
  1717.     The NVIDIA driver may be unable to reliably read/write the PCI
  1718.     Configuration Space of NVIDIA devices when the kernel is using the
  1719.     MMCONFIG method to access PCI Configuration Space, specifically when using
  1720.     multiple GPUs and multiple CPUs on 32-bit kernels.
  1721.  
  1722.     This access method can be identified by the presence of the string "PCI:
  1723.    Using MMCONFIG" in the 'dmesg' output on your system. This access method
  1724.     can be disabled via the "pci=nommconf" kernel parameter.
  1725.  
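             To check which access method the kernel selected, and to disable
             MMCONFIG if necessary (the boot loader line below is only an
             illustration; adapt it to your configuration), you can run:

             % dmesg | grep MMCONFIG

             and then append the parameter to the kernel line in your boot loader,
             for example:

             kernel (hdX,Y)/boot/vmlinuz... pci=nommconf
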
  1726. Notebooks
  1727.  
  1728.     If you are using a notebook see the "Known Notebook Issues" in Chapter 18.
  1729.  
  1730. FSAA
  1731.  
  1732.     When FSAA is enabled (the __GL_FSAA_MODE environment variable is set to a
  1733.     value that enables FSAA and a multisample visual is chosen), the rendering
  1734.     may be corrupted when resizing the window.
  1735.  
  1736. libGL DSO finalizer and pthreads
  1737.  
  1738.     When a multithreaded OpenGL application exits, it is possible for libGL's
  1739.    DSO finalizer (also known as the destructor, or "_fini") to be called
  1740.    while other threads are executing OpenGL code. The finalizer needs to free
  1741.    resources allocated by libGL. This can cause problems for threads that are
  1742.    still using these resources. Setting the environment variable
  1743.    "__GL_NO_DSO_FINALIZER" to "1" will work around this problem by forcing
  1744.    libGL's finalizer to leave its resources in place. These resources will
  1745.     still be reclaimed by the operating system when the process exits. Note
  1746.     that the finalizer is also executed as part of dlclose(3), so if you have
  1747.     an application that dlopens(3) and dlcloses(3) libGL repeatedly,
  1748.     "__GL_NO_DSO_FINALIZER" will cause libGL to leak resources until the
  1749.     process exits. Using this option can improve stability in some
  1750.     multithreaded applications, including Java3D applications.
  1751.  
  1752. XVideo and the Composite X extension
  1753.  
  1754.     XVideo will not work correctly when Composite is enabled unless using
  1755.     X.Org 7.1 or later. See Chapter 23.
  1756.  
  1757. This section describes problems that will not be fixed. Usually, the source of
  1758. the problem is beyond the control of NVIDIA. Following is the list of
  1759. problems:
  1760.  
  1761. Problems that Will Not Be Fixed
  1762.  
  1763. Gigabyte GA-6BX Motherboard
  1764.  
  1765.     This motherboard uses a LinFinity regulator on the 3.3 V rail that is only
  1766.     rated to 5 A -- less than the AGP specification, which requires 6 A. When
  1767.     diagnostics or applications are running, the temperature of the regulator
  1768.     rises, causing the voltage to the NVIDIA GPU to drop as low as 2.2 V.
  1769.     Under these circumstances, the regulator cannot supply the current on the
  1770.     3.3 V rail that the NVIDIA GPU requires.
  1771.  
  1772.     This problem does not occur when the graphics card has a switching
  1773.     regulator or when an external power supply is connected to the 3.3 V rail.
  1774.  
  1775. VIA KX133 and 694X Chip sets with AGP 2x
  1776.  
  1777.     On Athlon motherboards with the VIA KX133 or 694X chip set, such as the
  1778.     ASUS K7V motherboard, NVIDIA drivers default to AGP 2x mode to work around
  1779.     insufficient drive strength on one of the signals.
  1780.  
  1781. Irongate Chip sets with AGP 1x
  1782.  
  1783.     AGP 1x transfers are used on Athlon motherboards with the Irongate chipset
  1784.     to work around a problem with signal integrity.
  1785.  
  1786. ALi chipsets, ALi1541 and ALi1647
  1787.  
  1788.     On ALi1541 and ALi1647 chipsets, NVIDIA drivers disable AGP to work around
  1789.     timing issues and signal integrity issues. See Chapter 8 for more
  1790.     information on ALi chipsets.
  1791.  
  1792. NV-CONTROL versions 1.8 and 1.9
  1793.  
  1794.     Version 1.8 of the NV-CONTROL X Extension introduced target types for
  1795.     setting and querying attributes as well as receiving event notification on
  1796.     targets. Targets are objects like X Screens, GPUs and G-Sync devices.
  1797.     Previously, all attributes were described relative to an X Screen. These
  1798.     new bits of information (target type and target id) were packed in a
  1799.     non-compatible way in the protocol stream such that addressing X Screen 1
  1800.     or higher would generate an X protocol error when mixing NV-CONTROL client
  1801.     and server versions.
  1802.  
  1803.     This packing problem has been fixed in the NV-CONTROL 1.10 protocol,
  1804.     making it possible for the older (1.7 and prior) clients to communicate
  1805.     with NV-CONTROL 1.10 servers. Furthermore, the NV-CONTROL 1.10 client
  1806.     library has been updated to accommodate the target protocol packing bug
  1807.     when communicating with a 1.8 or 1.9 NV-CONTROL server. This means that
  1808.     the NV-CONTROL 1.10 client library should be able to communicate with any
  1809.     version of the NV-CONTROL server.
  1810.  
  1811.     NVIDIA recommends that NV-CONTROL client applications relink with version
  1812.     1.10 or later of the NV-CONTROL client library (libXNVCtrl.a, in the
  1813.     nvidia-settings-1.0.tar.gz tarball). The version of the client library can
  1814.     be determined by checking the NV_CONTROL_MAJOR and NV_CONTROL_MINOR
  1815.     definitions in the accompanying nv_control.h.
  1816.  
  1817.     The only web released NVIDIA Linux driver that is affected by this problem
  1818.     (i.e., the only driver to use either version 1.8 or 1.9 of the NV-CONTROL
  1819.     X extension) is 1.0-8756.
  1820.  
  1821. I/O APIC (SMP)
  1822.  
  1823.     If you are experiencing stability problems with a Linux SMP computer and
  1824.     seeing I/O APIC warning messages from the Linux kernel, system reliability
  1825.     may be greatly improved by setting the "noapic" kernel parameter.
  1826.  
  1827. Local APIC (UP)
  1828.  
  1829.     On some systems, setting the "Local APIC Support on Uniprocessors" kernel
  1830.     configuration option can have adverse effects on system stability and
  1831.     performance. If you are experiencing lockups with a Linux UP computer and
  1832.     have this option set, try disabling local APIC support.
  1833.  
  1834. nForce2 Chipsets and AGPGART
  1835.  
  1836.     Some of the earlier versions of agpgart that support the nForce2 chipset are
  1837.     known to contain bugs that result in system hangs. The suggested
  1838.     workaround is to use NVAGP or update to a newer kernel. Known problematic
  1839.     versions include all known Red Hat Enterprise Linux 3 kernels (through
  1840.     Update 7).
  1841.  
  1842.     If a broken agpgart is used on an nForce2 chipset, the NVIDIA driver will
  1843.     attempt to work around these agpgart bugs as best it can, by recovering
  1844.     from AGP errors and eventually disabling AGP.
  1845.  
  1846.     To configure NVAGP, see Chapter 12.
  1847.  
  1848.  
  1849. ______________________________________________________________________________
  1850.  
  1851. Chapter 10. Allocating DMA Buffers on 64-bit Platforms
  1852. ______________________________________________________________________________
  1853.  
  1854. NVIDIA GPUs have limits on how much physical memory they can address. This
  1855. directly impacts DMA buffers, as a DMA buffer allocated in physical memory
  1856. that is unaddressable by the NVIDIA GPU cannot be used (or may be truncated,
  1857. resulting in bad memory accesses).
  1858.  
  1859. All pre-PCI Express GPUs and non-Native PCI Express GPUs (often known as
  1860. bridged GPUs) are limited to 32 bits of physical address space, which
  1861. corresponds to 4 GB of memory. On a system with greater than 4 GB of memory,
  1862. allocating usable DMA buffers can be a problem. Native PCI Express GPUs are
  1863. capable of addressing greater than 32 bits of physical address space and do
  1864. not experience the same problems.
  1865.  
  1866. Newer kernels provide a simple way to allocate memory that is guaranteed to
  1867. reside within the 32 bit physical address space. Kernel 2.6.15 provides this
  1868. functionality with the __GFP_DMA32 interface. Kernels earlier than this
  1869. version provide a software I/O TLB on Intel's EM64T and IOMMU support on AMD's
  1870. AMD64 platform.
  1871.  
  1872. Unfortunately, some problems exist with both interfaces. Early implementations
  1873. of the Linux SWIOTLB set aside a very small amount of memory for its memory
  1874. pool (only 4 MB). Also, when this memory pool is exhausted, some SWIOTLB
  1875. implementations forcibly panic the kernel. This is also true for some
  1876. implementations of the IOMMU interface.
  1877.  
  1878. Kernel panics and related stability problems on Intel's EM64T platform can be
  1879. avoided by increasing the size of the SWIOTLB pool with the 'swiotlb' kernel
  1880. parameter. This kernel parameter expects the desired size as a number of 4 KB
  1881. pages. NVIDIA suggests raising the size of the SWIOTLB pool to 64 MB; this is
  1882. accomplished by passing 'swiotlb=16384' to the kernel. Note that newer Linux
  1883. 2.6 kernels already default to a 64 MB SWIOTLB pool; see below for more
  1884. information.
  1885.  
  1886. Starting with Linux 2.6.9, the default size of the SWIOTLB is 64 MB and
  1887. overflow handling is improved. Both of these changes are expected to greatly
  1888. improve stability on Intel's EM64T platform. If you consider upgrading your
  1889. Linux kernel to benefit from these improvements, NVIDIA recommends that you
  1890. upgrade to Linux 2.6.11 or a more recent Linux kernel. See the previous
  1891. section for additional information.
  1892.  
  1893. On AMD's AMD64 platform, the size of the IOMMU can be configured in the system
  1894. BIOS or, if no IOMMU BIOS option is available, using the 'iommu=memaper'
  1895. kernel parameter. This kernel parameter expects an order and instructs the
  1896. Linux kernel to create an IOMMU of size 32 MB^order overlapping physical
  1897. memory. If the system's default IOMMU is smaller than 64 MB, the Linux kernel
  1898. automatically replaces it with a 64 MB IOMMU.
  1899.  
  1900. To reduce the risk of stability problems as a result of IOMMU or SWIOTLB
  1901. exhaustion on the X86-64 platform, the NVIDIA Linux driver internally limits
  1902. its use of these interfaces. By default, the driver will not use more than 60
  1903. MB of IOMMU/SWIOTLB space, leaving 4 MB for the rest of the system (assuming a
  1904. 64 MB IOMMU/SWIOTLB).
  1905.  
  1906. This limit can be adjusted with the 'NVreg_RemapLimit' NVIDIA kernel module
  1907. option. Specifically, if the IOMMU/SWIOTLB is larger than 64 MB, the limit can
  1908. be adjusted to take advantage of the additional space. The 'NVreg_RemapLimit'
  1909. option expects the size argument in bytes.
  1910.  
  1911. NVIDIA recommends leaving 4 MB available for the rest of the system when
  1912. changing the limit. For example, if the internal limit is to be relaxed to
  1913. account for a 128 MB IOMMU/SWIOTLB, the recommended remap limit is 124 MB.
  1914. This remap limit can be specified by passing 'NVreg_RemapLimit=0x7c00000' to
  1915. the NVIDIA kernel module.
  1916.  
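        This can be set on the module load command line or, persistently, in your
        module configuration file (the file name below is distribution-dependent
        and only an example):

            # modprobe nvidia NVreg_RemapLimit=0x7c00000

        or, in /etc/modprobe.conf:

            options nvidia NVreg_RemapLimit=0x7c00000
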
  1917. Also see the 'The X86-64 platform (AMD64/EM64T) and 2.6 kernels' section in
  1918. Chapter 9.
  1919.  
  1920. ______________________________________________________________________________
  1921.  
  1922. Chapter 11. Specifying OpenGL Environment Variable Settings
  1923. ______________________________________________________________________________
  1924.  
  1925.  
  1926. 11A. FULL SCENE ANTIALIASING
  1927.  
  1928. Antialiasing is a technique used to smooth the edges of objects in a scene to
  1929. reduce the jagged "stairstep" effect that sometimes appears. Full-scene
  1930. antialiasing is supported on GeForce or newer hardware. By setting the
  1931. appropriate environment variable, you can enable full-scene antialiasing in
  1932. any OpenGL application on these GPUs.
  1933.  
  1934. Several antialiasing methods are available and you can select between them by
  1935. setting the __GL_FSAA_MODE environment variable appropriately. Note that
  1936. increasing the number of samples taken during FSAA rendering may decrease
  1937. performance.
  1938.  
  1939. The following tables describe the possible values for __GL_FSAA_MODE and the
  1940. effects that they have on various NVIDIA GPUs.
  1941.  
  1942.  
  1943.  
  1944.     __GL_FSAA_MODE     GeForce, GeForce2, Quadro, and Quadro2 Pro
  1945.     ---------------    ------------------------------------------------------
  1946.     0                  FSAA disabled
  1947.     1                  FSAA disabled
  1948.     2                  FSAA disabled
  1949.     3                  1.5 x 1.5 Supersampling
  1950.     4                  2 x 2 Supersampling
  1951.     5                  FSAA disabled
  1952.     6                  FSAA disabled
  1953.     7                  FSAA disabled
  1954.  
  1955.  
  1956.  
  1957.  
  1958.     __GL_FSAA_MODE     GeForce4 MX, GeForce4 4xx Go, Quadro4 380,550,580
  1959.                        XGL, and Quadro4 NVS
  1960.     ---------------    ------------------------------------------------------
  1961.     0                  FSAA disabled
  1962.     1                  2x Bilinear Multisampling
  1963.     2                  2x Quincunx Multisampling
  1964.     3                  FSAA disabled
  1965.     4                  2 x 2 Supersampling
  1966.     5                  FSAA disabled
  1967.     6                  FSAA disabled
  1968.     7                  FSAA disabled
  1969.  
  1970.  
  1971.  
  1972.  
  1973.     __GL_FSAA_MODE     GeForce3, Quadro DCC, GeForce4 Ti, GeForce4 4200 Go,
  1974.                        and Quadro4 700,750,780,900,980 XGL
  1975.     ---------------    ------------------------------------------------------
  1976.     0                  FSAA disabled
  1977.     1                  2x Bilinear Multisampling
  1978.     2                  2x Quincunx Multisampling
  1979.     3                  FSAA disabled
  1980.     4                  4x Bilinear Multisampling
  1981.     5                  4x Gaussian Multisampling
  1982.     6                  2x Bilinear Multisampling by 4x Supersampling
  1983.     7                  FSAA disabled
  1984.  
  1985.  
  1986.  
  1987.  
  1988.     __GL_FSAA_MODE     GeForce FX, GeForce 6xxx, GeForce 7xxx, Quadro FX
  1989.     ---------------    ------------------------------------------------------
  1990.     0                  FSAA disabled
  1991.     1                  2x Bilinear Multisampling
  1992.     2                  2x Quincunx Multisampling
  1993.     3                  FSAA disabled
  1994.     4                  4x Bilinear Multisampling
  1995.     5                  4x Gaussian Multisampling
  1996.     6                  2x Bilinear Multisampling by 4x Supersampling
  1997.     7                  4x Bilinear Multisampling by 4x Supersampling
  1998.     8                  4x Bilinear Multisampling by 2x Supersampling
  1999.                        (available on GeForce FX and later GPUs; not
  2000.                        available on Quadro GPUs)
  2001.  
  2002.  
  2003.  
  2004.  
  2005.     __GL_FSAA_MODE     GeForce 8xxx, G8xGL
  2006.     ---------------    ------------------------------------------------------
  2007.     0                  FSAA disabled
  2008.     1                  2x Bilinear Multisampling
  2009.     2                  FSAA disabled
  2010.     3                  FSAA disabled
  2011.     4                  4x Bilinear Multisampling
  2012.     5                  FSAA disabled
  2013.     6                  FSAA disabled
  2014.     7                  4x Bilinear Multisampling by 4x Supersampling
  2015.     8                  FSAA disabled
  2016.     9                  8x Bilinear Multisampling
  2017.     10                 8x
  2018.     11                 16x
  2019.     12                 16xQ
  2020.     13                 8x Bilinear Multisampling by 4x Supersampling
  2021.  
  2022.  
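        For example, to select 2 x 2 supersampling on a GPU for which mode 4
        provides it (see the tables above), in sh/bash syntax:

            % export __GL_FSAA_MODE=4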
  2023.  
  2024. 11B. ANISOTROPIC TEXTURE FILTERING
  2025.  
  2026. Automatic anisotropic texture filtering can be enabled by setting the
  2027. environment variable __GL_LOG_MAX_ANISO. The possible values are:
  2028.  
  2029.     __GL_LOG_MAX_ANISO                    Filtering Type
  2030.     ----------------------------------    ----------------------------------
  2031.     0                                     No anisotropic filtering
  2032.     1                                     2x anisotropic filtering
  2033.     2                                     4x anisotropic filtering
  2034.     3                                     8x anisotropic filtering
  2035.     4                                     16x anisotropic filtering
  2036.  
  2037. 4x and greater are only available on GeForce3 or newer GPUs; 16x is only
  2038. available on GeForce 6800 or newer GPUs.
  2039.  
  2040.  
  2041. 11C. VBLANK SYNCING
  2042.  
  2043. Setting the environment variable __GL_SYNC_TO_VBLANK to a non-zero value will
  2044. force glXSwapBuffers to sync to your monitor's vertical refresh (perform a
  2045. swap only during the vertical blanking period).
  2046.  
  2047. When using __GL_SYNC_TO_VBLANK with TwinView, OpenGL can only sync to one of
  2048. the display devices; this may cause tearing corruption on the display device
  2049. to which OpenGL is not syncing. You can use the environment variable
  2050. __GL_SYNC_DISPLAY_DEVICE to specify to which display device OpenGL should
  2051. sync. You should set this environment variable to the name of a display
  2052. device; for example "CRT-1". Look for the line "Connected display device(s):"
  2053. in your X log file for a list of the display devices present and their names.
  2054. You may also find it useful to review Chapter 13 "Configuring Twinview" and
  2055. the section on Ensuring Identical Mode Timings in Chapter 19.
  2056.  
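        For example, in sh/bash syntax (the display device name "CRT-1" is only an
        illustration; use a name reported in your X log file):

            % export __GL_SYNC_TO_VBLANK=1
            % export __GL_SYNC_DISPLAY_DEVICE="CRT-1"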
  2057.  
  2058. 11D. DISABLING CPU-SPECIFIC FEATURES
  2059.  
  2060. Setting the environment variable __GL_FORCE_GENERIC_CPU to a non-zero value
  2061. will inhibit the use of CPU-specific features such as MMX, SSE, or 3DNOW!. Use
  2062. of this option may result in performance loss.
  2063.  
  2064.  
  2065. 11E. CONTROLLING THE SORTING OF OPENGL FBCONFIGS
  2066.  
  2067. The NVIDIA GLX implementation sorts FBConfigs returned by glXChooseFBConfig()
  2068. as described in the GLX specification. To disable this behavior, set
  2069. __GL_SORT_FBCONFIGS to 0 (zero); FBConfigs will then be returned in the order
  2070. they were received from the X server. To examine the order in which FBConfigs
  2071. are returned by the X server run:
  2072.  
  2073. nvidia-settings --glxinfo
  2074.  
  2075. This option may be useful to work around problems in which applications
  2076. pick an unexpected FBConfig.
  2077.  
  2078.  
  2079. 11F. OPENGL YIELD BEHAVIOR
  2080.  
  2081. There are several cases where the NVIDIA OpenGL driver needs to wait for
  2082. external state to change before continuing. To avoid consuming too much CPU
  2083. time in these cases, the driver will sometimes yield so the kernel can
  2084. schedule other processes to run while the driver waits. For example, when
  2085. waiting for free space in a command buffer, if the free space has not become
  2086. available after a certain number of iterations, the driver will yield before
  2087. it continues to loop.
  2088.  
  2089. By default, the driver calls sched_yield() to do this. However, this can cause
  2090. the calling process to be scheduled out for a relatively long period of time
  2091. if there are other, same-priority processes competing for time on the CPU. One
  2092. example of this is when an OpenGL-based composite manager is moving and
  2093. repainting a window and the X server is trying to update the window as it
  2094. moves, which are both CPU-intensive operations.
  2095.  
  2096. You can use the __GL_YIELD environment variable to work around these
  2097. scheduling problems. This variable allows the user to specify what the driver
  2098. should do when it wants to yield. The possible values are:
  2099.  
  2100.    __GL_YIELD         Behavior
  2101.    ---------------    ------------------------------------------------------
  2102.    <unset>            By default, OpenGL will call sched_yield() to yield.
  2103.    "NOTHING"          OpenGL will never yield.
  2104.    "USLEEP"           OpenGL will call usleep(0) to yield.
  2105.  
  2106.  
  2107.  
  2108. 11G. CONTROLLING WHICH OPENGL FBCONFIGS ARE AVAILABLE
  2109.  
  2110. The NVIDIA GLX implementation will hide FBConfigs that are associated with a
  2111. 32-bit ARGB visual when the XLIB_SKIP_ARGB_VISUALS environment variable is
  2112. defined. This matches the behavior of libX11, which will hide those visuals
  2113. from XGetVisualInfo and XMatchVisualInfo. This environment variable is useful
  2114. when applications are confused by the presence of these FBConfigs.
  2115.  
  2116. ______________________________________________________________________________
  2117.  
  2118. Chapter 12. Configuring AGP
  2119. ______________________________________________________________________________
  2120.  
  2121. There are several choices for configuring the NVIDIA kernel module's use of
  2122. AGP on Linux. You can choose to either use the NVIDIA builtin AGP driver
  2123. (NvAGP), or the AGP driver that comes with the Linux kernel (AGPGART). This is
  2124. controlled through the "NvAGP" option in your X config file:
  2125.  
  2126.     Option "NvAGP" "0"  ... disables AGP support
  2127.     Option "NvAGP" "1"  ... use NvAGP, if possible
  2128.     Option "NvAGP" "2"  ... use AGPGART, if possible
  2129.     Option "NvAGP" "3"  ... try AGPGART; if that fails, try NvAGP
  2130.  
  2131. The default is 3 (the default was 1 until after 1.0-1251).
  2132.  
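        For example, to request the use of AGPGART (the Identifier shown is only an
        illustration; NVIDIA X driver options may be placed in the "Device" or
        "Screen" section of the X configuration file):

            Section "Device"
                Identifier "Videocard0"
                Driver     "nvidia"
                Option     "NvAGP" "2"
            EndSection
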
  2133. You should use the AGP driver that works best with your AGP chipset. If you
  2134. are experiencing problems with stability, you may want to start by disabling
  2135. AGP and seeing if that solves the problems. Then you can experiment with the
  2136. AGP driver configuration.
  2137.  
  2138. You can query the current AGP status at any time via the '/proc' filesystem
  2139. interface (see Chapter 21).
  2140.  
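        For example (assuming the driver's '/proc' interface is available at its
        usual location; see Chapter 21 for details):

            % cat /proc/driver/nvidia/agp/status
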
  2141. To use the Linux 2.4 AGPGART driver, you will need to compile it with your
  2142. kernel and either statically link it in, or build it as a module and load it.
  2143. To use the Linux 2.6 AGPGART driver, both the AGPGART frontend module,
  2144. 'agpgart.ko', and the backend module for your AGP chipset ('nvidia-agp.ko',
  2145. 'intel-agp.ko', 'via-agp.ko', ...) need to be statically linked into the
  2146. kernel, or built as modules and loaded.
  2147.  
  2148. NVIDIA builtin AGP support is unavailable if an AGPGART backend driver is
  2149. loaded into the kernel. On Linux 2.4, it is recommended that you compile
  2150. AGPGART as a module and make sure that it is not loaded when trying to use the
  2151. NVIDIA AGP driver. On Linux 2.6, the 'agpgart.ko' frontend module will always
  2152. be loaded, as it is used by the NVIDIA kernel module to determine if an
  2153. AGPGART backend module is loaded. When the NVIDIA AGP driver is to be used on
  2154. a Linux 2.6 system, it is recommended that you make sure the AGPGART backend
  2155. drivers are built as modules and that they are not loaded.
  2156.  
  2157. Also note that changing AGP drivers generally requires a reboot before the
  2158. changes actually take effect.
  2159.  
  2160. If you are using a recent Linux 2.6 kernel that has the Linux AGPGART driver
  2161. statically linked in (some distribution kernels do), you can pass the
  2162.  
  2163.     agp=off
  2164.  
  2165. parameter to the kernel (via LILO or GRUB, for example) to disable AGPGART
  2166. support. As of Linux 2.6.11, most AGPGART backend drivers should respect this
  2167. parameter.
  2168.  
  2169. The following AGP chipsets are supported by the NVIDIA AGP driver; for all
  2170. other chipsets it is recommended that you use the AGPGART module.
  2171.  
  2172.     Supported AGP Chipsets
  2173.     ----------------------------------------------------------------------
  2174.     Intel 440LX
  2175.     Intel 440BX
  2176.     Intel 440GX
  2177.     Intel 815 ("Solano")
  2178.     Intel 820 ("Camino")
  2179.     Intel 830M
  2180.     Intel 840 ("Carmel")
  2181.     Intel 845 ("Brookdale")
  2182.     Intel 845G
  2183.     Intel 850 ("Tehama")
  2184.     Intel 855 ("Odem")
  2185.     Intel 860 ("Colusa")
  2186.     Intel 865G ("Springdale")
  2187.     Intel 875P ("Canterwood")
  2188.     Intel E7205 ("Granite Bay")
  2189.     Intel E7505 ("Placer")
  2190.     AMD 751 ("Irongate")
  2191.     AMD 761 ("IGD4")
  2192.     AMD 762 ("IGD4 MP")
  2193.     AMD 8151 ("Lokar")
  2194.     VIA 8371
  2195.     VIA 82C694X
  2196.     VIA KT133
  2197.     VIA KT266
  2198.     VIA KT400
  2199.     VIA P4M266
  2200.     VIA P4M266A
  2201.     VIA P4X400
  2202.     VIA K8T800
  2203.     VIA K8N800
  2204.     VIA PT880
  2205.     VIA KT880
  2206.     RCC CNB20LE
  2207.     RCC 6585HE
  2208.     Micron SAMDDR ("Samurai")
  2209.     Micron SCIDDR ("Scimitar")
  2210.     NVIDIA nForce
  2211.     NVIDIA nForce2
  2212.     NVIDIA nForce3
  2213.     ALi 1621
  2214.     ALi 1631
  2215.     ALi 1647
  2216.     ALi 1651
  2217.     ALi 1671
  2218.     SiS 630
  2219.     SiS 633
  2220.     SiS 635
  2221.     SiS 645
  2222.     SiS 646
  2223.     SiS 648
  2224.     SiS 648FX
  2225.     SiS 650
  2226.     SiS 651
  2227.     SiS 655
  2228.     SiS 655FX
  2229.     SiS 661
  2230.     SiS 730
  2231.     SiS 733
  2232.     SiS 735
  2233.     SiS 745
  2234.     SiS 755
  2235.     ATI RS200M
  2236.  
  2237.  
  2238. If you are experiencing AGP stability problems, you should be aware of the
  2239. following:
  2240.  
  2241. Additional AGP Information
  2242.  
  2243. AGP Rate
  2244.  
  2245.     You may want to decrease the AGP rate setting if you are seeing lockups
  2246.     with the value you are currently using. You can do so by extracting the
  2247.     '.run' file:
  2248.    
  2249.         # sh NVIDIA-Linux-x86-173.14.39-pkg1.run --extract-only
  2250.         # cd NVIDIA-Linux-x86-173.14.39-pkg1/usr/src/nv/
  2251.    
  2252.     Then edit nv-reg.h, and make the following changes:
  2253.    
  2254.         - NV_DEFINE_REG_ENTRY(__NV_REQ_AGP_RATE, 15);
  2255.         + NV_DEFINE_REG_ENTRY(__NV_REQ_AGP_RATE, 4);   /* force AGP Rate to 4x
  2256.     */
  2257.    
  2258.     or
  2259.    
  2260.         + NV_DEFINE_REG_ENTRY(__NV_REQ_AGP_RATE, 2);   /* force AGP Rate to 2x
  2261.     */
  2262.    
  2263.     or
  2264.    
  2265.         + NV_DEFINE_REG_ENTRY(__NV_REQ_AGP_RATE, 1);   /* force AGP Rate to 1x
  2266.     */
  2267.    
  2268.     Then recompile and load the new kernel module. To do this, run
  2269.     'nvidia-installer' with the -n command line option:
  2270.    
  2271.         # cd ../../..; ./nvidia-installer -n
  2272.    
  2273.    
  2274. AGP drive strength BIOS setting (Via-based motherboards)
  2275.  
  2276.     Many Via-based motherboards allow adjusting the AGP drive strength in
  2277.     the system BIOS. This setting strongly affects system stability; the
  2278.     range between 0xEA and 0xEE seems to work best for NVIDIA hardware.
  2279.     Setting either nibble to 0xF generally results in severe stability
  2280.     problems.
  2281.  
  2282.     If you decide to experiment with this, be aware that you do so at your
  2283.     own risk, and that improper settings may render your system unbootable
  2284.     until you reset the setting to a working value (either with a PCI
  2285.     graphics card installed, or by resetting the BIOS to its default
  2286.     values).
  2287.  
  2288. System BIOS version
  2289.  
  2290.     Make sure you have the latest system BIOS provided by the motherboard
  2291.     manufacturer.
  2292.  
  2293.     On ALi1541 and ALi1647 chipsets, NVIDIA drivers disable AGP to work around
  2294.     timing and signal integrity problems. You can force AGP to be enabled on
  2295.     these chipsets by setting NVreg_EnableALiAGP to 1. Note that this may
  2296.     cause the system to become unstable.
  2297.  
  2298.     Early system BIOS revisions for the ASUS A7V8X-X KT400 motherboard
  2299.     misconfigure the chipset when an AGP 2.x graphics card is installed; if X
  2300.     hangs on your ASUS KT400 system with either Linux AGPGART or NvAGP enabled
  2301.     and the installed graphics card is not an AGP 8x device, make sure that
  2302.     you have the latest system BIOS installed.
  2303.  
  2304.  
  2305. ______________________________________________________________________________
  2306.  
  2307. Chapter 13. Configuring TwinView
  2308. ______________________________________________________________________________
  2309.  
  2310. TwinView is a mode of operation where two display devices (digital flat
  2311. panels, CRTs, and TVs) can display the contents of a single X screen in an
  2312. arbitrary configuration. This method of multiple-monitor use has several
  2313. distinct advantages over other techniques (such as Xinerama):
  2314.  
  2315.  
  2316.    o A single X screen is used. The NVIDIA driver conceals all information
  2317.      about multiple display devices from the X server; as far as X is
  2318.      concerned, there is only one screen.
  2319.  
  2320.    o Both display devices share one frame buffer. Thus, all the functionality
  2321.      present on a single display (e.g., accelerated OpenGL) is available with
  2322.      TwinView.
  2323.  
  2324.    o No additional overhead is needed to emulate having a single desktop.
  2325.  
  2326.  
  2327. If you are interested in using each display device as a separate X screen, see
  2328. Chapter 15.
  2329.  
  2330.  
  2331. 13A. X CONFIG TWINVIEW OPTIONS
  2332.  
  2333. To enable TwinView, you must specify the following option in the Device
  2334. section of your X Config file:
  2335.  
  2336.     Option "TwinView"
  2337.  
  2338. You may also use any of the following options, though they are not required:
  2339.  
  2340.     Option "MetaModes"                "<list of MetaModes>"
  2341.  
  2342.     Option "SecondMonitorHorizSync"   "<hsync range(s)>"
  2343.     Option "SecondMonitorVertRefresh" "<vrefresh range(s)>"
  2344.  
  2345.     Option "HorizSync"                "<hsync range(s)>"
  2346.     Option "VertRefresh"              "<vrefresh range(s)>"
  2347.  
  2348.     Option "TwinViewOrientation"      "<relationship of head 1 to head 0>"
  2349.     Option "ConnectedMonitor"         "<list of connected display devices>"
  2350.  
  2351. See detailed descriptions of each option below.
  2352.  
  2353. Alternatively, you can enable TwinView by running
  2354.  
  2355.     nvidia-xconfig --twinview
  2356.  
  2357. and restarting your X server. Or, you can configure TwinView dynamically in
  2358. the "Display Configuration" page in nvidia-settings.
  2359.  
  2360.  
  2361. 13B. DETAILED DESCRIPTION OF OPTIONS
  2362.  
  2363.  
  2364. TwinView
  2365.  
  2366.     This option is required to enable TwinView; without it, all other TwinView
  2367.     related options are ignored.
  2368.  
  2369. SecondMonitorHorizSync
  2370. SecondMonitorVertRefresh
  2371.  
  2372.     You specify the constraints of the second monitor through these options.
  2373.     The values given should follow the same convention as the "HorizSync" and
  2374.     "VertRefresh" entries in the Monitor section. As the XF86Config man page
  2375.     explains it: the ranges may be a comma separated list of distinct values
  2376.     and/or ranges of values, where a range is given by two distinct values
  2377.     separated by a dash. The HorizSync is given in kHz, and the VertRefresh is
  2378.     given in Hz.
  2379.  
  2380.     These options are normally not needed: by default, the NVIDIA X driver
  2381.     retrieves the valid frequency ranges from the display device's EDID (see
  2382.     Appendix B for a description of the "UseEdidFreqs" option). The
  2383.     SecondMonitor options will override any frequency ranges retrieved from
  2384.     the EDID.
  2385.  
  2386. HorizSync
  2387. VertRefresh
  2388.  
  2389.     Which display device is "first" and which is "second" is often unclear.
  2390.     For this reason, you may use these options instead of the SecondMonitor
  2391.     versions. With these options, you can specify a semicolon-separated list
  2392.     of frequency ranges, each optionally prepended with a display device name.
  2393.     For example:
  2394. 
  2395.         Option "HorizSync"   "CRT-0: 50-110;  DFP-0: 40-70"
  2396.         Option "VertRefresh" "CRT-0: 60-120;  DFP-0: 60"
  2397. 
  2398.     See Appendix C on Display Device Names for more information.
  2399. 
  2400.     These options are normally not needed: by default, the NVIDIA X driver
  2401.     retrieves the valid frequency ranges from the display device's EDID (see
  2402.     Appendix B for a description of the "UseEdidFreqs" option). The
  2403.     "HorizSync" and "VertRefresh" options override any frequency ranges
  2404.     retrieved from the EDID or any frequency ranges specified with the
  2405.     "SecondMonitorHorizSync" and "SecondMonitorVertRefresh" options.
  2406.  
  2407. MetaModes
  2408.  
  2409.     MetaModes are "containers" that store information about what mode should
  2410.     be used on each display device at any given time. Even if only one display
  2411.     device is actively in use, the NVIDIA X driver always uses a MetaMode to
  2412.     encapsulate the mode information per display device, so that it can
  2413.     support dynamically enabling TwinView.
  2414.  
  2415.     Multiple MetaModes list the combinations of modes and the sequence in
  2416.     which they should be used. When the NVIDIA driver tells X what modes are
  2417.     available, it is really the minimal bounding box of the MetaMode that is
  2418.     communicated to X, while the "per display device" mode is kept internal to
  2419.     the NVIDIA driver. In MetaMode syntax, modes within a MetaMode are comma
  2420.     separated, and multiple MetaModes are separated by semicolons. For
  2421.     example:
  2422.    
  2423.         "<mode name 0>, <mode name 1>; <mode name 2>, <mode name 3>"
  2424.    
  2425.     Where <mode name 0> is the name of the mode to be used on display device 0
  2426.     concurrently with <mode name 1> used on display device 1. A mode switch
  2427.     will then cause <mode name 2> to be used on display device 0 and <mode
  2428.     name 3> to be used on display device 1. Here is an example MetaMode:
  2429.    
  2430.         Option "MetaModes" "1280x1024,1280x1024; 1024x768,1024x768"
  2431.    
  2432.     If you want a display device to not be active for a certain MetaMode, you
  2433.     can use the mode name "NULL", or simply omit the mode name entirely:
  2434.    
  2435.         "1600x1200, NULL; NULL, 1024x768"
  2436.    
  2437.     or
  2438.    
  2439.         "1600x1200; , 1024x768"
  2440.    
  2441.     Optionally, mode names can be followed by offset information to control
  2442.     the positioning of the display devices within the virtual screen space;
  2443.     e.g.,
  2444.    
  2445.         "1600x1200 +0+0, 1024x768 +1600+0; ..."
  2446.    
  2447.     Offset descriptions follow the conventions used in the X "-geometry"
  2448.     command line option; i.e., both positive and negative offsets are valid,
  2449.     though negative offsets are only allowed when a virtual screen size is
  2450.     explicitly given in the X config file.
  2451.  
  2452.     When no offsets are given for a MetaMode, the offsets will be computed
  2453.     following the value of the TwinViewOrientation option (see below). Note
  2454.     that if offsets are given for any one of the modes in a single MetaMode,
  2455.     then offsets will be expected for all modes within that single MetaMode;
  2456.     in such a case offsets will be assumed to be +0+0 when not given.
  2457.  
  2458.     When not explicitly given, the virtual screen size will be computed as
  2459.     the bounding box of all MetaMode bounding boxes. MetaModes with a bounding
  2460.     box larger than an explicitly given virtual screen size will be discarded.
  2461.  
  2462.     A MetaMode string can be further modified with a "Panning Domain"
  2463.     specification; e.g.,
  2464.    
  2465.         "1024x768 @1600x1200, 800x600 @1600x1200"
  2466.    
  2467.     A panning domain is the area in which a display device's viewport will be
  2468.     panned to follow the mouse. Panning actually happens on two levels with
  2469.     TwinView: first, an individual display device's viewport will be panned
  2470.     within its panning domain, as long as the viewport is contained by the
  2471.     bounding box of the MetaMode. Once the mouse leaves the bounding box of
  2472.     the MetaMode, the entire MetaMode (i.e., all display devices) will be
  2473.     panned to follow the mouse within the virtual screen. Note that individual
  2474.     display devices' panning domains default to being clamped to the position
  2475.     of the display devices' viewports; thus, the default behavior is just that
  2476.     viewports remain "locked" together and only perform the second type of
  2477.     panning.
  2478.  
  2479.     The most beneficial use of panning domains is probably to eliminate dead
  2480.     areas -- regions of the virtual screen that are inaccessible due to
  2481.     display devices with different resolutions. For example:
  2482.    
  2483.         "1600x1200, 1024x768"
  2484.    
  2485.     produces an inaccessible region below the 1024x768 display. Specifying a
  2486.     panning domain for the second display device:
  2487.    
  2488.         "1600x1200, 1024x768 @1024x1200"
  2489.    
  2490.     provides access to that dead area by allowing you to pan the 1024x768
  2491.     viewport up and down in the 1024x1200 panning domain.
  2492.  
  2493.     Offsets can be used in conjunction with panning domains to position the
  2494.     panning domains in the virtual screen space (note that the offset
  2495.     describes the panning domain, and only affects the viewport in that the
  2496.     viewport must be contained within the panning domain). For example, the
  2497.     following describes two modes, each with a panning domain width of 1900
  2498.     pixels, and the second display is positioned below the first:
  2499.    
  2500.         "1600x1200 @1900x1200 +0+0, 1024x768 @1900x768 +0+1200"
  2501.    
  2502.     Because it is often unclear which mode within a MetaMode will be used on
  2503.     each display device, mode descriptions within a MetaMode can be prepended
  2504.     with a display device name. For example:
  2505.    
  2506.         "CRT-0: 1600x1200,  DFP-0: 1024x768"
  2507.    
  2508.     If no MetaMode string is specified, then the X driver uses the modes
  2509.     listed in the relevant "Display" subsection, attempting to place matching
  2510.     modes on each display device.
  2511.  
  2512. TwinViewOrientation
  2513.  
  2514.     This option controls the positioning of the second display device relative
  2515.     to the first within the virtual X screen, when offsets are not explicitly
  2516.     given in the MetaModes. The possible values are:
  2517.    
  2518.         "RightOf"  (the default)
  2519.         "LeftOf"
  2520.         "Above"
  2521.         "Below"
  2522.         "Clone"
  2523.    
  2524.     When "Clone" is specified, both display devices will be assigned an offset
  2525.     of 0,0.
  2526.  
  2527.     Because it is often unclear which display device is "first" and which is
  2528.     "second", TwinViewOrientation can be confusing. You can further clarify
  2529.     the TwinViewOrientation with display device names to indicate which
  2530.     display device is positioned relative to which display device. For
  2531.     example:
  2532.    
  2533.         "CRT-0 LeftOf DFP-0"
  2534.    
  2535.    
  2536. ConnectedMonitor
  2537.  
  2538.     With this option you can override what the NVIDIA kernel module detects is
  2539.     connected to your graphics card. This may be useful, for example, if any
  2540.     of your display devices do not support detection using Display Data
  2541.     Channel (DDC) protocols. Valid values are a comma-separated list of
  2542.     display device names; for example:
  2543.    
  2544.         "CRT-0, CRT-1"
  2545.         "CRT"
  2546.         "CRT-1, DFP-0"
  2547.    
  2548.     WARNING: this option overrides what display devices are detected by the
  2549.     NVIDIA kernel module, and is very seldom needed. You really only need this
  2550.     if a display device is not detected, either because it does not provide
  2551.     DDC information, or because it is on the other side of a KVM
  2552.     (Keyboard-Video-Mouse) switch. In most other cases, it is best not to
  2553.     specify this option.
  2554.  
  2555.  
  2556. Just as in all X config entries, spaces are ignored and all entries are case
  2557. insensitive.
  2558.  
  2559.  
  2560. 13C. DYNAMIC TWINVIEW
  2561.  
  2562. Using the NV-CONTROL X extension, the display devices in use by an X screen,
  2563. the mode pool for each display device, and the MetaModes for each X screen can
  2564. be dynamically manipulated. The "Display Configuration" page in
  2565. nvidia-settings uses this functionality to modify the MetaMode list and then
  2566. uses XRandR to switch between MetaModes. This gives the ability to dynamically
  2567. configure TwinView.
  2568.  
  2569. The details of how this works are documented in the nv-control-dpy.c sample
  2570. NV-CONTROL client in the nvidia-settings source tarball.
  2571.  
  2572. Because the NVIDIA X driver can now transition into and out of TwinView
  2573. dynamically, MetaModes are always used internally by the NVIDIA X driver,
  2574. regardless of how many display devices are currently in use by the X screen
  2575. and regardless of whether the TwinView X configuration option was specified.
  2576.  
  2577. One implication of this implementation is that each MetaMode must be uniquely
  2578. identifiable to the XRandR X extension. Unfortunately, two MetaModes with the
  2579. same bounding box will look the same to XRandR. For example, two MetaModes
  2580. with different orientations:
  2581.  
  2582.     "CRT: 1600x1200 +0+0, DFP: 1600x1200 +1600+0"
  2583.     "CRT: 1600x1200 +1600+0, DFP: 1600x1200 +0+0"
  2584.  
  2585. will look identical to the XRandR or XF86VidMode X extensions, because they
  2586. have the same total size (3200x1200), and nvidia-settings would not be able to
  2587. use XRandR to switch between these MetaModes. To work around this limitation,
  2588. the NVIDIA X driver "lies" about the refresh rate of each MetaMode, using the
  2589. refresh rate of the MetaMode as a unique identifier.
  2590.  
  2591. The XRandR extension is currently being redesigned by the X.Org community, so
  2592. the refresh rate workaround may be removed at some point in the future. This
  2593. workaround can also be disabled by setting the "DynamicTwinView" X
  2594. configuration option to FALSE, which will disable NV-CONTROL support for
  2595. manipulating MetaModes, but will cause the XRandR and XF86VidMode visible
  2596. refresh rate to be accurate.
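
For example, the corresponding line in the X configuration file would be
something like:

    Option "DynamicTwinView" "FALSE"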
  2597.  
  2598.  
  2599. FREQUENTLY ASKED TWINVIEW QUESTIONS
  2600.  
  2601. Q. Nothing gets displayed on my second monitor; what is wrong?
  2602.  
  2603. A. Monitors that do not support monitor detection using Display Data Channel
  2604.    (DDC) protocols (this includes most older monitors) are not detectable by
  2605.    your NVIDIA card. You need to explicitly tell the NVIDIA X driver what you
  2606.    have connected using the "ConnectedMonitor" option; e.g.,
  2607.    
  2608.        Option "ConnectedMonitor" "CRT, CRT"
  2609.    
  2610.    
  2611.  
  2612. Q. Will window managers be able to appropriately place windows (e.g., avoiding
  2613.    placing windows across both display devices, or in inaccessible regions of
  2614.    the virtual desktop)?
  2615.  
  2616. A. Yes. The NVIDIA X driver provides a Xinerama extension that X clients (such
  2617.    as window managers) can use to discover the current TwinView configuration.
  2618.    Note that the Xinerama protocol provides no way to notify clients when a
  2619.    configuration change occurs, so if you modeswitch to a different MetaMode,
  2620.    your window manager will still think you have the previous configuration.
  2621.    Using the Xinerama extension, in conjunction with the XF86VidMode extension
  2622.    to get modeswitch events, window managers should be able to determine the
  2623.    TwinView configuration at any given time.
  2624.  
  2625.    Unfortunately, the data provided by XineramaQueryScreens() appears to
  2626.    confuse some window managers; to work around such broken window managers,
  2627.    you can disable communication of the TwinView screen layout with the
  2628.    "NoTwinViewXineramaInfo" X config Option (see Appendix B for details).
  2629.  
  2630.    The order in which display devices are reported via the TwinView Xinerama
  2631.    information can be configured with the TwinViewXineramaInfoOrder X
  2632.    configuration option.
  2633.  
  2634.    Be aware that the NVIDIA driver cannot provide the Xinerama extension if
  2635.    the X server's own Xinerama extension is being used. Explicitly specifying
  2636.    Xinerama in the X config file or on the X server command line will prohibit
  2637.    NVIDIA's Xinerama extension from installing, so make sure that the X
  2638.    server's log file does not contain:
  2639.  
  2640.       (++) Xinerama: enabled
  2641.  
  2642.    if you want the NVIDIA driver to be able to provide the Xinerama extension
  2643.    while in TwinView.
  2644. 
  2645.    Another solution is to use panning domains to eliminate inaccessible
  2646.    regions of the virtual screen (see the MetaMode description above).
  2647. 
  2648.    A third solution is to use two separate X screens, rather than use
  2649.    TwinView. See Chapter 15.
  2650.  
  2651.  
  2652. Q. Why can I not get a resolution of 1600x1200 on the second display device
  2653.    when using a GeForce2 MX?
  2654.  
  2655. A. Because the second display device on the GeForce2 MX was designed to be a
  2656.    digital flat panel, the Pixel Clock for the second display device is only
  2657.    150 MHz. This effectively limits the resolution on the second display
  2658.    device to somewhere around 1280x1024 (for a description of how Pixel Clock
  2659.    frequencies limit the programmable modes, see the XFree86 Video Timings
  2660.    HOWTO). This constraint is not present on GeForce4 or GeForce FX GPUs --
  2661.    the maximum pixel clock is the same on both heads.
  2662.  
  2663.  
  2664. Q. Do video overlays work across both display devices?
  2665.  
  2666. A. Hardware video overlays only work on the first display device. The current
  2667.    solution is to use blitted video instead when TwinView is enabled.
  2668.  
  2669.  
  2670. Q. How are virtual screen dimensions determined in TwinView?
  2671.  
  2672. A. After all requested modes have been validated, and the offsets for each
  2673.    MetaMode's viewports have been computed, the NVIDIA driver computes the
  2674.    bounding box of the panning domains for each MetaMode. The maximum bounding
  2675.    box width and height are then used as the virtual screen dimensions.
  2676.  
  2677.    Note that one side effect of this is that the virtual width and virtual
  2678.    height may come from different MetaModes. Given the following MetaMode
  2679.    string:
  2680.    
  2681.        "1600x1200,NULL; 1024x768+0+0, 1024x768+0+768"
  2682.    
  2683.    the resulting virtual screen size will be 1600 x 1536.
  2684.  
  2685.  
  2686. Q. Can I play full screen games across both display devices?
  2687.  
  2688. A. Yes. While the details of configuration will vary from game to game, the
  2689.    basic idea is that a MetaMode presents X with a mode whose resolution is
  2690.    the bounding box of the viewports for that MetaMode. For example, the
  2691.    following:
  2692.    
  2693.        Option "MetaModes" "1024x768,1024x768; 800x600,800x600"
  2694.        Option "TwinViewOrientation" "RightOf"
  2695.    
  2696.    produces two modes: one whose resolution is 2048x768, and another whose
  2697.    resolution is 1600x600. Games such as Quake 3 Arena use the VidMode
  2698.    extension to discover the resolutions of the modes currently available. To
  2699.    configure Quake 3 Arena to use the above MetaMode string, add the following
  2700.    to your q3config.cfg file:
  2701.    
  2702.        seta r_customaspect "1"
  2703.        seta r_customheight "600"
  2704.        seta r_customwidth  "1600"
  2705.        seta r_fullscreen   "1"
  2706.        seta r_mode         "-1"
  2707.    
  2708.    Note that, given the above configuration, there is no mode with a
  2709.    resolution of 800x600 (remember that the MetaMode "800x600, 800x600" has a
  2710.    resolution of 1600x600), so if you change Quake 3 Arena to use a
  2711.    resolution of 800x600, it will display in the lower left corner of your
  2712.    screen, with the rest of the screen grayed out. To have single-head modes
  2713.    available as well, an appropriate MetaMode string might be something like:
  2714. 
  2715.        "800x600,800x600; 1024x768,NULL; 800x600,NULL; 640x480,NULL"
  2716. 
  2717.    More precise configuration information for specific games is beyond the
  2718.    scope of this document, but the above examples coupled with numerous online
  2719.    sources should be enough to point you in the right direction.
  2720.  
  2721.  
  2722. ______________________________________________________________________________
  2723.  
  2724. Chapter 14. Configuring GLX in Xinerama
  2725. ______________________________________________________________________________
  2726.  
  2727. The NVIDIA Linux Driver supports GLX when Xinerama is enabled on similar GPUs.
  2728. The Xinerama extension takes multiple physical X screens (possibly spanning
  2729. multiple GPUs), and binds them into one logical X screen. This allows windows
  2730. to be dragged between GPUs and to span across multiple GPUs. The NVIDIA driver
  2731. supports hardware accelerated OpenGL rendering across all NVIDIA GPUs when
  2732. Xinerama is enabled.
  2733.  
  2734. To configure Xinerama:
  2735.  
  2736.  1. Configure multiple X screens (refer to the XF86Config(5x) or
  2737.     xorg.conf(5x) manpages for details).
  2738.  
  2739.  2. Enable Xinerama by adding the line
  2740.    
  2741.         Option "Xinerama" "True"
  2742.    
  2743.     to the "ServerFlags" section of your X config file.
  2744.  
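For example, the resulting ServerFlags section would look like:

    Section "ServerFlags"
        Option "Xinerama" "True"
    EndSection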
  2745.  
  2746. Requirements:
  2747.  
  2748.   o Using identical GPUs is recommended. Some combinations of non-identical,
  2749.     but similar, GPUs are supported. If a GPU is incompatible with the rest
  2750.     of a Xinerama desktop then no OpenGL rendering will appear on the screens
  2751.     driven by that GPU. Rendering will still appear normally on screens
  2752.     connected to other supported GPUs. In this situation the X log file will
  2753.     include a message of the form:
  2754.  
  2755.  
  2756.  
  2757. (WW) NVIDIA(2): The GPU driving screen 2 is incompatible with the rest of
  2758. (WW) NVIDIA(2):      the GPUs composing the desktop.  OpenGL rendering will
  2759. (WW) NVIDIA(2):      be disabled on screen 2.
  2760.  
  2761.  
  2762.  
  2763.   o The NVIDIA X driver must be used for all X screens in the server.
  2764.  
  2765.   o Only the intersection of capabilities across all GPUs will be advertised.
  2766.  
  2767.     The maximum OpenGL viewport size depends on the hardware used, and is
  2768.     described by the following table. If an OpenGL window is larger than the
  2769.     maximum viewport, regions beyond the viewport will be blank.
  2770.    
  2771.         OpenGL Viewport Maximums in Xinerama
  2772.        
  2773.         GeForce GPUs before GeForce 8:      4096 x 4096 pixels
  2774.         GeForce 8 and newer GPUs:           8192 x 8192 pixels
  2775.         Quadro:                             as large as the Xinerama
  2776.                                             desktop
  2777.    
  2778.    
  2779.   o X configuration options that affect GLX operation (e.g.: stereo,
  2780.     overlays) should be set consistently across all X screens in the X
  2781.     server.
  2782.  
  2783.  
  2784. Known Issues:
  2785.  
  2786.   o Versions of XFree86 prior to 4.5 and versions of X.Org prior to 6.8.0
  2787.     lack the required interfaces to properly implement overlays with the
  2788.     Xinerama extension. On earlier server versions mixing overlays and
  2789.     Xinerama will result in rendering corruption. If you are using the
  2790.     Xinerama extension with overlays, it is recommended that you upgrade to
  2791.     XFree86 4.5, X.Org 6.8.0, or newer.
  2792.  
  2793.  
  2794. ______________________________________________________________________________
  2795.  
  2796. Chapter 15. Configuring Multiple X Screens on One Card
  2797. ______________________________________________________________________________
  2798.  
  2799. GPUs that support TwinView (Chapter 13) can also be configured to treat each
  2800. connected display device as a separate X screen.
  2801.  
  2802. While there are several disadvantages to this approach as compared to TwinView
  2803. (e.g.: windows cannot be dragged between X screens, hardware accelerated
  2804. OpenGL cannot span the two X screens), it does offer several advantages over
  2805. TwinView:
  2806.  
  2807.   o If each display device is a separate X screen, then properties that may
  2808.     vary between X screens may vary between displays (e.g.: depth, root
  2809.     window size, etc).
  2810.  
  2811.   o Hardware that can only be used on one display at a time (e.g.: video
  2812.     overlays, hardware accelerated RGB overlays), and which consequently
  2813.     cannot be used at all when in TwinView, can be exposed on the first X
  2814.     screen when each display is a separate X screen.
  2815.  
  2816.   o TwinView is a fairly new feature. X has historically used one screen per
  2817.     display device.
  2818.  
  2819.  
  2820. To configure two separate X screens to share one graphics card, here is what
  2821. you will need to do:
  2822.  
  2823. First, create two separate Device sections, each listing the BusID of the
  2824. graphics card to be shared and listing the driver as "nvidia", and assign each
  2825. a separate screen:
  2826.  
  2827.    Section "Device"
  2828.        Identifier  "nvidia0"
  2829.        Driver      "nvidia"
  2830.        # Edit the BusID with the location of your graphics card
  2831.        BusID       "PCI:2:0:0"
  2832.        Screen      0
  2833.    EndSection
  2834.  
  2835.    Section "Device"
  2836.        Identifier  "nvidia1"
  2837.        Driver      "nvidia"
  2838.        # Edit the BusID with the location of your graphics card
  2839.        BusID       "PCI:2:0:0"
  2840.        Screen      1
  2841.    EndSection
  2842.  
  2843. Then, create two Screen sections, each using one of the Device sections:
  2844.  
  2845.    Section "Screen"
  2846.        Identifier  "Screen0"
  2847.        Device      "nvidia0"
  2848.        Monitor     "Monitor0"
  2849.        DefaultDepth 24
  2850.        Subsection "Display"
  2851.            Depth       24
  2852.            Modes       "1600x1200" "1024x768" "800x600" "640x480"
  2853.        EndSubsection
  2854.    EndSection
  2855.  
  2856.    Section "Screen"
  2857.        Identifier  "Screen1"
  2858.        Device      "nvidia1"
  2859.        Monitor     "Monitor1"
  2860.        DefaultDepth 24
  2861.        Subsection "Display"
  2862.            Depth       24
  2863.            Modes       "1600x1200" "1024x768" "800x600" "640x480"
  2864.        EndSubsection
  2865.    EndSection
  2866.  
  2867. (Note: You'll also need to create a second Monitor section.) Finally, update
  2868. the ServerLayout section to use and position both Screen sections:
  2869.  
  2870.    Section "ServerLayout"
  2871.        ...
  2872.        Screen         0 "Screen0"
  2873.        Screen         1 "Screen1" leftOf "Screen0"
  2874.        ...
  2875.    EndSection
  2876.  
  2877. For further details, refer to the XF86Config(5x) or xorg.conf(5x) manpages.
  2878.  
  2879. ______________________________________________________________________________
  2880.  
  2881. Chapter 16. Configuring TV-Out
  2882. ______________________________________________________________________________
  2883.  
  2884. NVIDIA GPU-based graphics cards with a TV-Out connector can use a television
  2885. as another display device (the same way that it would use a CRT or digital
  2886. flat panel). The TV can be used by itself, or in conjunction with another
  2887. display device in a TwinView or multiple X screen configuration. If a TV is
  2888. the only display device connected to your graphics card, it will be used as
  2889. the primary display when you boot your system (i.e. the console will come up
  2890. on the TV just as if it were a CRT).
  2891.  
  2892. The NVIDIA X driver populates the mode pool for the TV with all the mode sizes
  2893. that the driver supports with the given TV standard and the TV encoder on the
  2894. graphics card. These modes are given names that correspond to their
  2895. resolution; e.g., "800x600".
  2896.  
  2897. Because these TV modes only depend on the TV encoder and the TV standard, TV
  2898. modes do not go through normal mode validation. The X configuration options
  2899. HorizSync and VertRefresh are not used for TV mode validation.
  2900.  
  2901. Additionally, the NVIDIA driver contains a hardcoded list of mode sizes that
  2902. it can drive for each combination of TV encoder and TV standard. Therefore,
  2903. custom modelines in your X configuration file are ignored for TVs.
  2904.  
  2905. To use your TV with X, there are several relevant X configuration options:
  2906.  
  2907.   o The Modes in the screen section of your X configuration file; you can use
  2908.     these to request any of the modes in the mode pool which the X driver
  2909.     created for this combination of TV standard and TV encoder. Examples
  2910.     include "640x480" and "800x600". If in doubt, use "nvidia-auto-select".
  2911.  
  2912.   o The "TVStandard" option should be added to your screen section; valid
  2913.     values are:
  2914.    
  2915.         TVStandard       Description
  2916.         -------------    --------------------------------------------------
  2917.         "PAL-B"          used in Belgium, Denmark, Finland, Germany,
  2918.                          Guinea, Hong Kong, India, Indonesia, Italy,
  2919.                          Malaysia, The Netherlands, Norway, Portugal,
  2920.                          Singapore, Spain, Sweden, and Switzerland
  2921.         "PAL-D"          used in China and North Korea
  2922.         "PAL-G"          used in Denmark, Finland, Germany, Italy,
  2923.                          Malaysia, The Netherlands, Norway, Portugal,
  2924.                          Spain, Sweden, and Switzerland
  2925.         "PAL-H"          used in Belgium
  2926.         "PAL-I"          used in Hong Kong and The United Kingdom
  2927.         "PAL-K1"         used in Guinea
  2928.         "PAL-M"          used in Brazil
  2929.         "PAL-N"          used in France, Paraguay, and Uruguay
  2930.         "PAL-NC"         used in Argentina
  2931.         "NTSC-J"         used in Japan
  2932.         "NTSC-M"         used in Canada, Chile, Colombia, Costa Rica,
  2933.                          Ecuador, Haiti, Honduras, Mexico, Panama, Puerto
  2934.                          Rico, South Korea, Taiwan, United States of
  2935.                          America, and Venezuela
  2936.         "HD480i"         480 line interlaced
  2937.         "HD480p"         480 line progressive
  2938.         "HD720p"         720 line progressive
  2939.         "HD1080i"        1080 line interlaced
  2940.         "HD1080p"        1080 line progressive
  2941.         "HD576i"         576 line interlace
  2942.         "HD576p"         576 line progressive
  2943.    
  2944.     The line in your X config file should be something like:
  2945.    
  2946.         Option "TVStandard" "NTSC-M"
  2947.    
  2948.     If you do not specify a TVStandard, or you specify an invalid value, the
  2949.     default "NTSC-M" will be used. Note: if your country is not in the above
  2950.     list, select the country closest to your location.
  2951.  
  2952.   o The "UseDisplayDevice" option can be used if there are multiple display
  2953.     devices connected, and you want the connected TV to be used instead of
  2954.     the connected CRTs and/or DFPs. E.g.,
  2955.    
  2956.         Option "UseDisplayDevice" "TV"
  2957.    
  2958.     Using the "UseDisplayDevice" option, rather than the "ConnectedMonitor"
  2959.     option, is recommended.
  2960.  
  2961.   o The "TVOutFormat" option can be used to force the output format. Without
  2962.     this option, the driver autodetects the output format. Unfortunately, it
  2963.     does not always do this correctly. The output format can be forced with
  2964.     the "TVOutFormat" option; valid values are:
  2965.    
  2966.         TVOutFormat            Description            Supported TV
  2967.                                                       standards
  2968.         -------------------    -------------------    -------------------
  2969.         "AUTOSELECT"           The driver             PAL, NTSC, HD
  2970.                                autodetects the    
  2971.                                output format      
  2972.                                (default value).  
  2973.         "COMPOSITE"            Force Composite        PAL, NTSC
  2974.                                output format      
  2975.         "SVIDEO"               Force S-Video          PAL, NTSC
  2976.                                output format      
  2977.         "COMPONENT"            Force Component        HD
  2978.                                output format, also
  2979.                                called YPrPp      
  2980.         "SCART"                Force Scart output     PAL, NTSC
  2981.                                format, also called
  2982.                                Peritel            
  2983.    
  2984.     The line in your X config file should be something like:
  2985.    
  2986.         Option "TVOutFormat" "SVIDEO"
  2987.    
  2988.    
  2989.   o The "TVOverScan" option can be used to enable Overscan, when the TV
  2990.     encoder supports it. Valid values are decimal values in the range 1.0
  2991.     (which means overscan as much as possible: make the image as large as
  2992.     possible) and 0.0 (which means disable overscanning: make the image as
  2993.     small as possible). Overscanning is disabled (0.0) by default.
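
Putting these options together, a sketch of a Screen section configured for
TV output might look like the following (the identifiers, TV standard, output
format, overscan value, and mode list are placeholders; adjust them for your
setup):

    Section "Screen"
        Identifier   "Screen0"
        Device       "nvidia0"
        Monitor      "Monitor0"
        DefaultDepth 24
        Option       "TVStandard"  "PAL-G"
        Option       "TVOutFormat" "SVIDEO"
        Option       "TVOverScan"  "0.7"
        Subsection "Display"
            Depth    24
            Modes    "800x600" "640x480"
        EndSubsection
    EndSection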
  2994.  
  2995. The NVIDIA X driver may not restore the console correctly with XFree86
  2996. versions older than 4.3 when the console is a TV. This is due to binary
  2997. incompatibilities between XFree86 int10 modules. If you use a TV as your
  2998. console it is recommended that you upgrade to XFree86 4.3 or later.
  2999.  
  3000. ______________________________________________________________________________
  3001.  
  3002. Chapter 17. Using the XRandR Extension
  3003. ______________________________________________________________________________
  3004.  
  3005. X.Org version X11R6.8.1 contains support for the rotation component of the
  3006. XRandR extension, which allows screens to be rotated at 90 degree increments.
  3007.  
  3008. The driver supports rotation with the extension when 'Option "RandRRotation"'
  3009. is enabled in the X config file.
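
The corresponding line in your X config file would be something like:

    Option "RandRRotation" "True"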
  3010.  
  3011. Workstation RGB or CI overlay visuals will function at lower performance and
  3012. the video overlay will not be available when RandRRotation is enabled.
  3013.  
  3014. You can query the available rotations using the 'xrandr' command line
  3015. interface to the RandR extension by running:
  3016.  
  3017.    xrandr -q
  3018.  
  3019. You can set the rotation orientation of the screen by running any of:
  3020.  
  3021.    xrandr -o left
  3022.    xrandr -o right
  3023.    xrandr -o inverted
  3024.    xrandr -o normal
  3025.  
  3026. Rotation may also be set through the nvidia-settings configuration utility in
  3027. the "Rotation Settings" panel.
  3028.  
  3029. SLI and rotation are incompatible. Rotation will be disabled when SLI is
  3030. enabled.
  3031.  
  3032. TwinView and rotation can be used together, but rotation affects the entire
  3033. desktop. This means that the same rotation setting will apply to both display
  3034. devices in a TwinView pair. Note also that the "TwinViewOrientation" option
  3035. applies before rotation does. For example, if you have two screens
  3036. side-by-side and you want to rotate them, you should set "TwinViewOrientation"
  3037. to "Above" or "Below".
  3038.  
  3039. ______________________________________________________________________________
  3040.  
  3041. Chapter 18. Configuring a Notebook
  3042. ______________________________________________________________________________
  3043.  
  3044.  
  3045. 18A. INSTALLATION AND CONFIGURATION
  3046.  
  3047. Installation and configuration of the NVIDIA Linux Driver Set on a notebook is
  3048. the same as for any desktop environment, with a few additions, as described
  3049. below.
  3050.  
  3051.  
  3052. 18B. POWER MANAGEMENT
  3053.  
  3054. All notebook NVIDIA GPUs support power management, both S3 (also known as
  3055. "Standby" or "Suspend to RAM") and S4 (also known as "Hibernate", "Suspend to
  3056. Disk" or "SWSUSP"). Power management is system-specific and is dependent upon
  3057. all the components in the system; some systems may be more problematic than
  3058. others.
  3059.  
  3060. Most recent notebook NVIDIA GPUs also support PowerMizer, which monitors
  3061. application workload and adjusts system parameters to deliver the optimal
  3062. balance of performance and battery life. However, PowerMizer is only enabled
  3063. by default on some notebooks. Please see the known issues below for more
  3064. details.
  3065.  
  3066.  
  3067. 18C. HOTKEY SWITCHING OF DISPLAY DEVICES
  3068.  
  3069. Mobile NVIDIA GPUs also have the capacity to react to a display change hotkey
  3070. event, toggling between each of the connected display devices and each
  3071. possible combination of the connected display devices (note that only 2
  3072. display devices may be active at a time).
  3073.  
  3074. Hotkey switching dynamically changes the TwinView configuration; a given
  3075. hotkey event will indicate which display devices should be in use at that
  3076. time, and all MetaModes currently configured on the X screen will be updated
  3077. to use the new configuration of display devices.
  3078.  
  3079. Another important aspect of hotkey functionality is that you can dynamically
  3080. connect and remove display devices to/from your notebook and use the hotkey to
  3081. activate and deactivate them without restarting X.
  3082.  
  3083. Note that there are two approaches to implementing this hotkey support: ACPI
  3084. events and polling.
  3085.  
  3086. Most recent notebooks use ACPI events to deliver hotkeys from the System BIOS
  3087. to the graphics driver. This is the preferred method of delivering hotkey
  3088. events, but is still a new feature under most UNIX platforms and may not
  3089. always function correctly.
  3090.  
  3091. The polling mechanism requires checking during the vertical blanking interval
  3092. for a hotkey status change. It is an older mechanism for handling hotkeys, and
  3093. is therefore not supported on all notebooks and is not tested by notebook
  3094. manufacturers. It also does not always report the same combinations of display
  3095. devices that are reported by ACPI hotkey events.
  3096.  
  3097. The NVIDIA Linux Driver will attempt to use ACPI hotkey events, if possible.
  3098. In the case that ACPI hotkey event support is not available, the driver will
  3099. fall back to hotkey polling. In the case that the notebook does not
  3100. support hotkey polling, hotkeys will not work. Please see the known issues
  3101. section below for more details.
  3102.  
  3103. When switching away from X to a virtual terminal, the VGA console will always
  3104. be restored to the display device on which it was present when X was started.
  3105. Similarly, when switching back into X, the same display device configuration
  3106. will be used as when you switched away, regardless of what display change
  3107. hotkey activity occurred while the virtual terminal was active.
  3108.  
  3109.  
  3110. 18D. DOCKING EVENTS
  3111.  
  3112. All notebook NVIDIA GPUs support docking; however, support may be limited by
  3113. the OS or system. There are three types of notebook docking (hot, warm, and
  3114. cold), which refer to the state of the system when the docking event occurs:
  3115. hot refers to a powered-on system with a live desktop, warm refers to a system
  3116. that has entered a suspended power management state, and cold refers to a
  3117. system that has been powered off. Only warm and cold docking are supported by
  3118. the NVIDIA driver.
  3119.  
  3120.  
  3121. 18E. TWINVIEW
  3122.  
  3123. All notebook NVIDIA GPUs support TwinView. TwinView on a notebook can be
  3124. configured in the same way as on a desktop computer (refer to Chapter 13);
  3125. note that in a TwinView configuration using the notebook's internal flat panel
  3126. and an external CRT, the CRT is the primary display device (specify its
  3127. HorizSync and VertRefresh in the Monitor section of your X config file) and
  3128. the flat panel is the secondary display device (specify its HorizSync and
  3129. VertRefresh through the SecondMonitorHorizSync and SecondMonitorVertRefresh
  3130. options).
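
For example (the frequency ranges below are placeholders; use the values from
your flat panel's documentation):

    Option "SecondMonitorHorizSync"   "30-75"
    Option "SecondMonitorVertRefresh" "60"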
  3131.  
  3132. The "UseEdidFreqs" X config option is enabled by default, so normally you
  3133. should not need to specify the "SecondMonitorHorizSync" and
  3134. "SecondMonitorVertRefresh" options. See the description of the UseEdidFreqs
  3135. option in Appendix B for details.
  3136.  
  3137.  
  3138. 18F. KNOWN NOTEBOOK ISSUES
  3139.  
  3140. There are a few known issues associated with notebooks:
  3141.  
  3142.   o Display change hotkey switching is not available on all notebooks. In
  3143.     some cases, the ACPI infrastructure is not fully supported by the NVIDIA
  3144.     Linux Driver. Work is ongoing to increase the robustness of NVIDIA's
  3145.     support in this area. Toshiba and Lenovo notebooks are known to be
  3146.     problematic.
  3147.  
  3148.   o ACPI Display change hotkey switching is not supported by X.Org X servers
  3149.     earlier than 1.2.0; see EnableACPIHotkeys in Appendix B for details.
  3150.  
  3151.   o In many cases, suspending and/or resuming will fail. As mentioned above,
  3152.     this functionality is very system-specific. There are still many cases
  3153.     that are problematic. Here are some tips that may help:
  3154.    
  3155.        o In some cases, hibernation can have bad interactions with the PCI
  3156.          Express bus clocks, which can lead to system hangs when entering
  3157.          hibernation. This issue is still being investigated, but a known
  3158.          workaround is to leave an OpenGL application running when
  3159.          hibernating.
  3160.    
  3161.        o On notebooks with relatively little system memory, repetitive
  3162.          hibernation attempts may fail due to insufficient free memory. This
  3163.          problem can be avoided by running `echo 0 > /sys/power/image_size`,
  3164.          which reduces the image size to be stored during hibernation.
  3165.    
  3166.        o Some distributions use a tool called vbetool to save and restore VGA
  3167.          adapter state. This tool is incompatible with NVIDIA GPUs' Video
  3168.          BIOSes and is likely to lead to problems restoring the GPU and its
  3169.          state. Disabling calls to this tool in your distribution's init
  3170.          scripts may improve power management reliability.
  3171.    
  3172.    
  3173.   o On some notebooks, PowerMizer is not enabled by default. This issue is
  3174.     being investigated, and there is no known workaround.
  3175.  
  3176.   o The video overlay only works on the first display device on which you
  3177.     started X. For example, if you start X on the internal LCD, run a video
  3178.     application that uses the video overlay (uses the "Video Overlay" adapter
  3179.     advertised through the XV extension), and then hotkey switch to add a
  3180.     second display device, the video will not appear on the second display
  3181.     device. To work around this, you can either configure the video
  3182.     application to use the "Video Blitter" adapter advertised through the XV
  3183.     extension (this is always available), or hotkey switch to the display
  3184.     device on which you want to use the video overlay *before* starting X.
  3185.  
  3186.  
  3187. ______________________________________________________________________________
  3188.  
  3189. Chapter 19. Programming Modes
  3190. ______________________________________________________________________________
  3191.  
  3192. The NVIDIA Accelerated Linux Graphics Driver supports all standard VGA and
  3193. VESA modes, as well as most user-written custom mode lines; double-scan modes
  3194. are supported on all hardware. Interlaced modes are supported on all GeForce
  3195. FX/Quadro FX and newer GPUs, and certain older GPUs; the X log file will
  3196. contain a message "Interlaced video modes are supported on this GPU" if
  3197. interlaced modes are supported.
  3198.  
  3199. To request one or more standard modes for use in X, you can simply add a
  3200. "Modes" line such as:
  3201.  
  3202.    Modes "1600x1200" "1024x768" "640x480"
  3203.  
  3204. in the appropriate Display subsection of your X config file (see the
  3205. XF86Config(5x) or xorg.conf(5x) man pages for details). Or, the
  3206. nvidia-xconfig(1) utility can be used to request additional modes; for
  3207. example:
  3208.  
  3209.    nvidia-xconfig --mode 1600x1200
  3210.  
  3211. See the nvidia-xconfig(1) man page for details.
  3212.  
  3213.  
  3214. 19A. DEPTH, BITS PER PIXEL, AND PITCH
  3215.  
  3216. While not directly a concern when programming modes, the bits used per pixel
  3217. is an issue when considering the maximum programmable resolution; for this
  3218. reason, it is worthwhile to address the confusion surrounding the terms
  3219. "depth" and "bits per pixel". Depth is how many bits of data are stored per
  3220. pixel. Supported depths are 8, 15, 16, and 24. Most video hardware, however,
  3221. stores pixel data in sizes of 8, 16, or 32 bits; this is the amount of memory
  3222. allocated per pixel. When you specify your depth, X selects the bits per pixel
  3223. (bpp) size in which to store the data. Below is a table of what bpp is used
  3224. for each possible depth:
  3225.  
  3226.    Depth                                 BPP
  3227.    ----------------------------------    ----------------------------------
  3228.    8                                     8
  3229.    15                                    16
  3230.    16                                    16
  3231.    24                                    32
  3232.  
  3233. Lastly, the "pitch" is how many bytes in the linear frame buffer there are
  3234. between one pixel's data, and the data of the pixel immediately below. You can
  3235. think of this as the horizontal resolution multiplied by the bytes per pixel
  3236. (bits per pixel divided by 8). In practice, the pitch may be more than this
  3237. product due to alignment constraints.
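
For example, a mode 1600 pixels wide at depth 24 (32 bits per pixel) has a
pitch of at least 1600 * 4 = 6400 bytes per scanline.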
  3238.  
  3239.  
  3240. 19B. MAXIMUM RESOLUTIONS
  3241.  
  3242. The NVIDIA Accelerated Linux Graphics Driver and NVIDIA GPU-based graphics
  3243. cards support resolutions up to 8192x8192 pixels for the GeForce 8 series and
  3244. above, and up to 4096x4096 pixels for the GeForce 7 series and below, though
  3245. the maximum resolution your system can support is also limited by the amount
  3246. of video memory (see USEFUL FORMULAS for details) and the maximum supported
  3247. resolution of your display device (monitor/flat panel/television). Also note
  3248. that while use of a video overlay does not limit the maximum resolution or
  3249. refresh rate, video memory bandwidth used by a programmed mode does affect the
  3250. overlay quality.
  3251.  
  3252.  
  3253. 19C. USEFUL FORMULAS
  3254.  
  3255. The maximum resolution is a function both of the amount of video memory and
  3256. the bits per pixel you elect to use:
  3257.  
  3258. HR * VR * (bpp/8) = Video Memory Used
  3259.  
  3260. In other words, the amount of video memory used is equal to the horizontal
  3261. resolution (HR) multiplied by the vertical resolution (VR) multiplied by the
  3262. bytes per pixel (bits per pixel divided by eight). Technically, the video
  3263. memory used is actually the pitch times the vertical resolution, and the pitch
  3264. may be slightly greater than (HR * (bpp/8)) to accommodate the hardware
  3265. requirement that the pitch be a multiple of some value.
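
For example, a 1600x1200 mode at depth 24 (32 bpp) requires at least
1600 * 1200 * 4 = 7,680,000 bytes (roughly 7.3 MB) of video memory for the
frame buffer.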
  3266.  
  3267. Note that this is just memory usage for the frame buffer; video memory is also
  3268. used by other things, such as OpenGL and pixmap caching.
  3269.  
  3270. Another important relationship is that between the resolution, the pixel clock
  3271. (aka dot clock) and the vertical refresh rate:
  3272.  
  3273. RR = PCLK / (HFL * VFL)
  3274.  
  3275. In other words, the refresh rate (RR) is equal to the pixel clock (PCLK)
  3276. divided by the total number of pixels: the horizontal frame length (HFL)
  3277. multiplied by the vertical frame length (VFL) (note that these are the frame
  3278. lengths, and not just the visible resolutions). As described in the XFree86
  3279. Video Timings HOWTO, the above formula can be rewritten as:
  3280.  
  3281. PCLK = RR * HFL * VFL
  3282.  
  3283. Given a maximum pixel clock, you can adjust the RR, HFL and VFL as desired, as
  3284. long as the product of the three is consistent. The pixel clock is reported in
  3285. the log file. Your X log should contain a line like this:
  3286.  
  3287.    (--) NVIDIA(0): ViewSonic VPD150 (DFP-1): 165 MHz maximum pixel clock
  3288.  
  3289. which indicates the maximum pixel clock for that display device.
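
For example, the standard 1600x1200 @ 60 Hz timing has a horizontal frame
length of 2160 and a vertical frame length of 1250, so it requires a pixel
clock of 60 * 2160 * 1250 = 162,000,000 Hz (162 MHz), which fits within the
165 MHz maximum reported above.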
  3290.  
  3291.  
  3292. 19D. HOW MODES ARE VALIDATED
  3293.  
  3294. In traditional XFree86/X.Org mode validation, the X server takes as a starting
  3295. point the X server's internal list of VESA standard modes, plus any modes
  3296. specified with special ModeLines in the X configuration file's Monitor
  3297. section. These modes are validated against criteria such as the valid
  3298. HorizSync/VertRefresh frequency ranges for the user's monitor (as specified in
  3299. the Monitor section of the X configuration file), as well as the maximum pixel
  3300. clock of the GPU.
  3301.  
  3302. Once the X server has determined the set of valid modes, it takes the list of
  3303. user requested modes (i.e., the set of modes named in the "Modes" line in the
  3304. Display subsection of the Screen section of X configuration file), and finds
  3305. the "best" validated mode with the requested name.
  3306.  
  3307. The NVIDIA X driver uses a variation on the above approach to perform mode
  3308. validation. During X server initialization, the NVIDIA X driver builds a pool
  3309. of valid modes for each display device. It gathers all possible modes from
  3310. several sources:
  3311.  
  3312.   o The display device's EDID
  3313.  
  3314.   o The X server's built-in list
  3315.  
  3316.   o Any user-specified ModeLines in the X configuration file
  3317.  
  3318.   o The VESA standard modes
  3319.  
  3320. For every possible mode, the mode is run through mode validation. The core of
  3321. mode validation is still performed similarly to traditional XFree86/X.Org mode
  3322. validation: the mode timings are checked against things such as the valid
  3323. HorizSync and VertRefresh ranges and the maximum pixel clock. Note that each
  3324. individual stage of mode validation can be independently controlled through
  3325. the "ModeValidation" X configuration option.
  3326.  
  3327. Note that when validating interlaced mode timings, VertRefresh specifies the
  3328. field rate, rather than the frame rate. For example, the following modeline
  3329. has a vertical refresh rate of 87 Hz:
  3330.  
  3331.  
  3332. # 1024x768i @ 87Hz (industry standard)
  3333. ModeLine "1024x768"  44.9  1024 1032 1208 1264  768 768 776 817 +hsync +vsync
  3334. Interlace
  3335.  
  3336.  
  3337. Invalid modes are discarded; valid modes are inserted into the mode pool. See
  3338. MODE VALIDATION REPORTING for how to get more details on mode validation
  3339. results for each considered mode.
  3340.  
  3341. Valid modes are given a name that is guaranteed to be unique across the
  3342. whole mode pool for this display device. This mode name is constructed
  3343. approximately like this:
  3344.  
  3345.    <width>x<height>_<refreshrate>
  3346.  
  3347. (e.g., "1600x1200_85")
  3348.  
  3349. The name may also have another number appended to ensure the mode name is
  3350. unique; e.g., "1600x1200_85_0".
  3351.  
  3352. As validated modes are inserted into the mode pool, duplicate modes are
  3353. removed, and the mode pool is sorted, such that the "best" modes are at the
  3354. beginning of the mode pool. The sorting is based roughly on:
  3355.  
  3356.   o Resolution
  3357.  
  3358.   o Source (EDID-provided modes are prioritized higher than VESA-provided
  3359.     modes, which are prioritized higher than modes that were in the X
  3360.     server's built-in list)
  3361.  
  3362.   o Refresh rate
  3363.  
  3364. Once modes from all mode sources are validated and the mode pool is
  3365. constructed, all modes with the same resolution are compared; the best mode
  3366. with that resolution is added to the mode pool a second time, using just the
  3367. resolution as its unique mode name (e.g., "1600x1200"). In this way, when you
  3368. request a mode using the traditional names (e.g., "1600x1200"), you still get
  3369. what you got before (the 'best' 1600x1200 mode); the added benefit is that all
  3370. modes in the mode pool can be addressed by a unique name.
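
For example (an illustrative sketch; the exact names depend on the mode pool
reported in your X log), these unique names can be requested directly on the
"Modes" line of the Display subsection:

   Modes "1600x1200_85" "1600x1200"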
  3371.  
  3372. When verbose logging is enabled (see the FAQ section on increasing the amount
  3373. of data printed in the X log file), the mode pool for each display device is
  3374. printed to the X log file.
  3375.  
  3376. After the mode pool is built for all display devices, the requested modes (as
  3377. specified in the X configuration file) are looked up from the mode pool. Each
  3378. requested mode that can be matched against a mode in the mode pool is then
  3379. advertised to the X server and is available to the user through the X server's
  3380. mode switching hotkeys (ctrl-alt-plus/minus) and the XRandR and XF86VidMode X
  3381. extensions.
  3382.  
  3383. If only one display device is in use by the X screen when the X server starts,
  3384. all modes in the mode pool are implicitly made available to the X server. See
  3385. the "IncludeImplicitMetaModes" X configuration option in Appendix B for
  3386. details.
  3387.  
  3388.  
  3389. 19E. THE NVIDIA-AUTO-SELECT MODE
  3390.  
  3391. You can request a special mode in the X config file by the name
  3392. "nvidia-auto-select". When the X driver builds the mode pool for a display
  3393. device, it selects one of the modes as the "nvidia-auto-select" mode; a new
  3394. entry is made in the mode pool, and "nvidia-auto-select" is used as the unique
  3395. name for the mode.
  3396.  
  3397. The "nvidia-auto-select" mode is intended to be a reasonable mode for the
  3398. display device in question. For example, the "nvidia-auto-select" mode is
  3399. normally the native resolution for flatpanels, as reported by the flatpanel's
  3400. EDID, or one of the detailed timings from the EDID. The "nvidia-auto-select"
  3401. mode is guaranteed to always be present, and to always be defined as something
  3402. considered valid by the X driver for this display device.
  3403.  
  3404. Note that the "nvidia-auto-select" mode is not necessarily the largest
  3405. possible resolution, nor is it necessarily the mode with the highest refresh
  3406. rate. Rather, the "nvidia-auto-select" mode is selected such that it is a
  3407. reasonable default. The selection process is roughly:
  3408.  
  3409.  
  3410.   o If the EDID for the display device reported a preferred mode timing, and
  3411.     that mode timing is considered a valid mode, then that mode is used as
  3412.     the "nvidia-auto-select" mode. You can check if the EDID reported a
  3413.     preferred timing by starting X with log verbosity greater than or equal to
  3414.     5 (see the FAQ section on increasing the amount of data printed in the X
  3415.     log file), and looking at the EDID printout; if the EDID contains a line:
  3416.    
  3417.         Prefer first detailed timing : Yes
  3418.    
  3419.     Then the first mode listed under the "Detailed Timings" in the EDID will
  3420.     be used.
  3421.  
  3422.   o If the EDID did not provide a preferred timing, the best detailed timing
  3423.     from the EDID is used as the "nvidia-auto-select" mode.
  3424.  
  3425.   o If the EDID did not provide any detailed timings (or there was no EDID at
  3426.     all), the best valid mode not larger than 1024x768 is used as the
  3427.     "nvidia-auto-select" mode. The 1024x768 limit is imposed here to restrict
  3428.     use of modes that may have been validated, but may be too large to be
  3429.     considered a reasonable default, such as 2048x1536.
  3430.  
  3431.   o If all else fails, the X driver will use a built-in 800 x 600 60Hz mode
  3432.     as the "nvidia-auto-select" mode.
  3433.  
  3434.  
  3435. If no modes are requested in the X configuration file, or none of the
  3436. requested modes can be found in the mode pool, then the X driver falls back to
  3437. the "nvidia-auto-select" mode, so that X can always start. Appropriate warning
  3438. messages will be printed to the X log file in these fallback scenarios.
  3439.  
  3440. You can add the "nvidia-auto-select" mode to your X configuration file by
  3441. running the command
  3442.  
  3443.    nvidia-xconfig --mode nvidia-auto-select
  3444.  
  3445. and restarting your X server.
  3446.  
  3447. The X driver can generally do a much better job of selecting the
  3448. "nvidia-auto-select" mode if the display device's EDID is available. This is
  3449. one reason why the "IgnoreEDID" X configuration option has been deprecated,
  3450. and why it is recommended that the "UseEDID" X configuration option be used
  3451. sparingly. Note that, rather than globally disable all uses of the EDID with
  3452. the "UseEDID" option, you can individually disable each particular use of the
  3453. EDID using the "UseEDIDFreqs", "UseEDIDDpi", and/or the "NoEDIDModes" argument
  3454. in the "ModeValidation" X configuration option.
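
For example (a sketch only; see Appendix B for the exact argument names and
syntax accepted by the "ModeValidation" option), EDID-provided modes could be
excluded from the mode pool, while the other uses of the EDID remain in
effect, with an option such as:

   Option "ModeValidation" "NoEDIDModes"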
  3455.  
  3456.  
  3457. 19F. MODE VALIDATION REPORTING
  3458.  
  3459. When log verbosity is set to 6 or higher (see the FAQ section on increasing
  3460. the amount of data printed in the X log file), the X log
  3461. will record every mode that is considered for each display device's mode pool,
  3462. and report whether the mode passed or failed. For modes that were considered
  3463. invalid, the log will report why the mode was considered invalid.
  3464.  
  3465.  
  3466. 19G. ENSURING IDENTICAL MODE TIMINGS
  3467.  
  3468. Some functionality, such as Active Stereo with TwinView, requires control over
  3469. exactly which mode timings are used. For explicit control over which mode
  3470. timings are used on each display device, you can specify the ModeLine you want
  3471. to use (using one of the available ModeLine generators), giving it a unique
  3472. name. For example, if you wanted to use 1024x768 at 120 Hz on each monitor in
  3473. TwinView with active stereo, you might add something like this to the monitor
  3474. section of your X configuration file:
  3475.  
  3476.    # 1024x768 @ 120.00 Hz (GTF) hsync: 98.76 kHz; pclk: 139.05 MHz
  3477.    Modeline "1024x768_120"  139.05  1024 1104 1216 1408  768 769 772 823
  3478. -HSync +Vsync
  3479.  
  3480. Then, in the Screen section of your X config file, specify a MetaMode like
  3481. this:
  3482.  
  3483.    Option "MetaModes" "1024x768_120, 1024x768_120"
  3484.  
  3485.  
  3486.  
  3487. 19H. ADDITIONAL INFORMATION
  3488.  
  3489. An XFree86 ModeLine generator, conforming to the GTF Standard, is available at
  3490. http://gtf.sourceforge.net/. Additional generators can be found by searching
  3491. for "modeline" on freshmeat.net.
  3492.  
  3493. ______________________________________________________________________________
  3494.  
  3495. Chapter 20. Configuring Flipping and UBB
  3496. ______________________________________________________________________________
  3497.  
  3498. The NVIDIA Accelerated Linux Graphics Driver supports Unified Back Buffer
  3499. (UBB) and OpenGL Flipping. These features can provide performance gains in
  3500. certain situations.
  3501.  
  3502.   o Unified Back Buffer (UBB): UBB is available only on the Quadro family of
  3503.     GPUs (Quadro4 NVS excluded) and is enabled by default when there is
  3504.     sufficient video memory available. This can be disabled with the UBB X
  3505.     config option described in Appendix B. When UBB is enabled, all windows
  3506.     share the same back, stencil and depth buffers. When there are many
  3507.     windows, the back, stencil and depth usage will never exceed the size of
  3508.     that used by a full screen window. However, even for a single small
  3509.     window, the back, stencil, and depth video memory usage is that of a full
  3510.     screen window. In that case video memory may be used less efficiently
  3511.     than in the non-UBB case.
  3512.  
  3513.   o Flipping: When OpenGL flipping is enabled, OpenGL can perform buffer
  3514.     swaps by changing which buffer the DAC scans out rather than copying the
  3515.     back buffer contents to the front buffer; this is generally a much higher
  3516.     performance mechanism and allows tearless swapping during the vertical
  3517.     retrace (when __GL_SYNC_TO_VBLANK is set). The conditions under which
  3518.     OpenGL can flip are slightly complicated, but in general: on GeForce or
  3519.     newer hardware, OpenGL can flip when a single full screen unobscured
  3520.     OpenGL application is running, and __GL_SYNC_TO_VBLANK is enabled.
  3521.     Additionally, OpenGL can flip on Quadro hardware even when an OpenGL
  3522.     window is partially obscured or not full screen or __GL_SYNC_TO_VBLANK is
  3523.     not enabled.
  3524.  
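As an illustration (glxgears here stands in for any OpenGL application), the
__GL_SYNC_TO_VBLANK environment variable mentioned above can be set for a
single invocation from the shell:

   % __GL_SYNC_TO_VBLANK=1 glxgears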
  3525.  
  3526. ______________________________________________________________________________
  3527.  
  3528. Chapter 21. Using the Proc Filesystem Interface
  3529. ______________________________________________________________________________
  3530.  
  3531. You can use the /proc filesystem interface to obtain run-time information
  3532. about the driver, any installed NVIDIA graphics cards, and the AGP status.
  3533.  
  3534. This information is contained in several files in /proc/driver/nvidia:
  3535.  
  3536. /proc/driver/nvidia/version
  3537.  
  3538.    Lists the installed driver revision and the version of the GNU C compiler
  3539.    used to build the Linux kernel module.
  3540.  
  3541. /proc/driver/nvidia/warnings
  3542.  
  3543.    The NVIDIA graphics driver tries to detect potential problems with the
  3544.    host system's kernel and warns about them using the kernel's printk()
  3545.    mechanism, typically logged by the system to '/var/log/messages'.
  3546.  
  3547.    Important NVIDIA warning messages are also logged to dedicated text files
  3548.    in this /proc directory.
  3549.  
  3550. /proc/driver/nvidia/cards/0...3
  3551.  
  3552.    Provide information about each of the installed NVIDIA graphics adapters
  3553.    (model name, IRQ, BIOS version, Bus Type). Note that the BIOS version is
  3554.    only available while X is running.
  3555.  
  3556. /proc/driver/nvidia/agp/card
  3557.  
  3558.    Information about the installed AGP card's AGP capabilities.
  3559.  
  3560. /proc/driver/nvidia/agp/host-bridge
  3561.  
  3562.    Information about the host bridge (model and AGP capabilities).
  3563.  
  3564. /proc/driver/nvidia/agp/status
  3565.  
  3566.    The current AGP status. If AGP support has been enabled on your system,
  3567.    the AGP driver being used, the AGP rate, and information about the status
  3568.    of AGP Fast Writes and Side Band Addressing are shown.
  3569.  
  3570.    The AGP driver is either NVIDIA (NVIDIA built-in AGP driver) or AGPGART
  3571.    (the Linux kernel's agpgart.o driver). If you see "inactive" next to
  3572.    AGPGART, then this means that the AGP chipset was programmed by AGPGART,
  3573.    but is not currently in use.
  3574.  
  3575.    SBA and Fast Writes indicate whether either one of these features is
  3576.    currently in use. Note that several factors determine whether support for
  3577.    either will be enabled. Even if both the AGP card and the host bridge
  3578.    support them, the driver may decide not to use these features in favor of
  3579.    system stability. This is particularly true of AGP Fast Writes.
  3580.  
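For example, the installed driver version and the current AGP status can be
inspected from a shell with:

   % cat /proc/driver/nvidia/version
   % cat /proc/driver/nvidia/agp/status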
  3581.  
  3582. ______________________________________________________________________________
  3583.  
  3584. Chapter 22. Configuring Power Management Support
  3585. ______________________________________________________________________________
  3586.  
  3587. The NVIDIA driver includes support for both APM- and ACPI-based power
  3588. management. The NVIDIA Linux driver supports APM-based suspend and resume, as
  3589. well as ACPI standby (S3) and suspend (S4).
  3590.  
  3591. To use APM, your system's BIOS will need to support APM, rather than ACPI.
  3592. Many, but not all, of the GeForce2- and GeForce4-based notebooks include APM
  3593. support. You can check for APM support via the procfs interface (check for the
  3594. existence of /proc/apm) or via the kernel's boot output:
  3595.  
  3596.    % dmesg | grep -i apm
  3597.  
  3598. a message similar to this indicates APM support:
  3599.  
  3600.    apm: BIOS version 1.2 Flags 0x03 (Driver version 1.16)
  3601.  
  3602. or a message like this indicates no APM support:
  3603.  
  3604.    No APM support in Kernel
  3605.  
  3606. Note: If you are using Linux kernel 2.6 and your kernel was configured with
  3607. support for both ACPI and APM, the NVIDIA kernel module will be built with
  3608. ACPI Power Management support. If you wish to use APM, you will need to
  3609. rebuild the Linux kernel without ACPI support and reinstall the NVIDIA Linux
  3610. graphics driver.
  3611.  
  3612. Sometimes chipsets lose their AGP configuration during suspend, and may cause
  3613. corruption on the bus upon resume. The AGP driver is required to save and
  3614. restore relevant register state on such systems; NVIDIA's NvAGP is notified of
  3615. power management events and ensures its configuration is kept intact across
  3616. suspend/resume cycles.
  3617.  
  3618. Linux 2.4 AGPGART does not support power management; Linux 2.6 AGPGART does,
  3619. but only for a few select chipsets. If you use either of these two AGP drivers
  3620. and find your system fails to resume reliably, you may have more success with
  3621. the NvAGP driver.
  3622.  
  3623. Disabling AGP support (see Chapter 12 for more details on disabling AGP) will
  3624. also work around this problem.
  3625.  
  3626. More recent systems are more likely to support ACPI. ACPI is supported by the
  3627. NVIDIA graphics driver in 2.6 and newer kernels. The driver supports ACPI
  3628. standby (S3) and includes beta support for ACPI suspend (S4).
  3629.  
  3630. If you enable ACPI S4 support via suspend2 patches, you will need to tweak the
  3631. Linux kernel such that it dynamically determines the number of pages needed by
  3632. the drivers that will be suspended in the system. This is done by issuing the
  3633. following command as root:
  3634.  
  3635.    % echo 0 > /sys/power/suspend2/extra_pages_allowance
  3636.  
  3637. Older versions of suspend2 may provide a different interface, in which case
  3638. the following command needs to be issued as root:
  3639.  
  3640.    % echo 0 > /proc/suspend2/extra_pages_allowance
  3641.  
  3642. The system does NOT need to be rebooted; in fact, the setting does not persist
  3643. across reboots, so you will need to include the tweak in your startup
  3644. scripts. Failure to perform the tweak will result in a hang when the system
  3645. attempts to suspend. For further information regarding suspend2 patches, see
  3646. http://www.suspend2.net/.
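
As an illustration only (adapt the path to the suspend2 version installed on
your system), a startup script run as root could apply the tweak to whichever
interface is present:

   #!/bin/sh
   # Apply the suspend2 extra_pages_allowance tweak at boot (run as root).
   if [ -w /sys/power/suspend2/extra_pages_allowance ]; then
       echo 0 > /sys/power/suspend2/extra_pages_allowance
   elif [ -w /proc/suspend2/extra_pages_allowance ]; then
       echo 0 > /proc/suspend2/extra_pages_allowance
   fi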
  3647.  
  3648. ______________________________________________________________________________
  3649.  
  3650. Chapter 23. Using the X Composite Extension
  3651. ______________________________________________________________________________
  3652.  
  3653. X.Org X servers, beginning with X11R6.8.0, contain experimental support for a
  3654. new X protocol extension called Composite. This extension allows windows to be
  3655. drawn into pixmaps instead of directly onto the screen. In conjunction with
  3656. the Damage and Render extensions, this allows a program called a composite
  3657. manager to blend windows together to draw the screen.
  3658.  
  3659. Performance will be degraded significantly if the "RenderAccel" option is
  3660. disabled in xorg.conf. See Appendix B for more details.
  3661.  
  3662. When the NVIDIA X driver is used with an X.Org X server X11R6.9.0 or newer and
  3663. the Composite extension is enabled, NVIDIA's OpenGL implementation interacts
  3664. properly with the Damage and Composite X extensions. This means that OpenGL
  3665. rendering is drawn into offscreen pixmaps and the X server is notified of the
  3666. Damage event when OpenGL renders to the pixmap. This allows OpenGL
  3667. applications to behave properly in a composited X desktop.
  3668.  
  3669. If the Composite extension is enabled on an X server older than X11R6.9.0,
  3670. then GLX will be disabled. You can force GLX on while Composite is enabled on
  3671. pre-X11R6.9.0 X servers with the "AllowGLXWithComposite" X configuration
  3672. option. However, GLX will not render correctly in this environment. Upgrading
  3673. your X server to X11R6.9.0 or newer is recommended.
  3674.  
  3675. You can enable the Composite X extension by running 'nvidia-xconfig
  3676. --composite'. Composite can be disabled with 'nvidia-xconfig --no-composite'.
  3677. See the nvidia-xconfig(1) man page for details.
  3678.  
  3679. If you are using Composite with GLX, it is recommended that you also enable
  3680. the "DamageEvents" X option for enhanced performance. If you are using an
  3681. OpenGL-based composite manager, you may also need the "DisableGLXRootClipping"
  3682. option to obtain proper output.
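
For example (a sketch only; see Appendix B for the full descriptions of these
options), both options can be set in the X configuration file:

   Option "DamageEvents" "True"
   Option "DisableGLXRootClipping" "True"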
  3683.  
  3684. The Composite extension also causes problems with other driver components:
  3685.  
  3686.   o In X servers prior to X.Org 7.1, Xv cannot draw into pixmaps that have
  3687.     been redirected offscreen and will draw directly onto the screen instead.
  3688.     For some programs you can work around this issue by using an alternative
  3689.     video driver. For example, "mplayer -vo x11" will work correctly, as will
  3690.     "xine -V xshm". If you must use Xv with an older server, you can also
  3691.     disable the compositing manager and re-enable it when you are finished.
  3692.  
  3693.     On X.Org 7.1 and higher, the driver will properly redirect video into
  3694.     offscreen pixmaps. Note that the Xv adaptors will ignore the
  3695.     sync-to-vblank option when drawing into a redirected window.
  3696.  
  3697.   o Workstation overlays, stereo visuals, and the unified back buffer (UBB)
  3698.     are incompatible with Composite. These features will be automatically
  3699.     disabled when Composite is detected.
  3700.  
  3701.  
  3702. This NVIDIA Linux driver supports OpenGL rendering to 32-bit ARGB windows on
  3703. X.Org 7.2 and higher, or when the "AddARGBGLXVisuals" X config file option is
  3704. enabled. If you are an application developer, you can use these new visuals in
  3705. conjunction with a composite manager to create translucent OpenGL
  3706. applications:
  3707.  
  3708.    int attrib[] = {
  3709.        GLX_RENDER_TYPE, GLX_RGBA_BIT,
  3710.        GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
  3711.        GLX_RED_SIZE, 1,
  3712.        GLX_GREEN_SIZE, 1,
  3713.        GLX_BLUE_SIZE, 1,
  3714.        GLX_ALPHA_SIZE, 1,
  3715.        GLX_DOUBLEBUFFER, True,
  3716.        GLX_DEPTH_SIZE, 1,
  3717.        None };
  3718.    GLXFBConfig *fbconfigs, fbconfig;
  3719.    int i, numfbconfigs, render_event_base, render_error_base;
  3720.    XVisualInfo *visinfo;
  3721.    XRenderPictFormat *pictFormat;
  3722.  
  3723.    /* Make sure we have the RENDER extension */
  3724.    if(!XRenderQueryExtension(dpy, &render_event_base, &render_error_base)) {
  3725.        fprintf(stderr, "No RENDER extension found\n");
  3726.        exit(EXIT_FAILURE);
  3727.    }
  3728.  
  3729.    /* Get the list of FBConfigs that match our criteria */
  3730.    fbconfigs = glXChooseFBConfig(dpy, scrnum, attrib, &numfbconfigs);
  3731.    if (!fbconfigs) {
  3732.        /* None matched */
  3733.        exit(EXIT_FAILURE);
  3734.    }
  3735.  
  3736.    /* Find an FBConfig with a visual that has a RENDER picture format that
  3737.     * has alpha */
  3738.    for (i = 0; i < numfbconfigs; i++) {
  3739.        visinfo = glXGetVisualFromFBConfig(dpy, fbconfigs[i]);
  3740.        if (!visinfo) continue;
  3741.        pictFormat = XRenderFindVisualFormat(dpy, visinfo->visual);
  3742.        if (!pictFormat) { XFree(visinfo); continue; }
  3743.  
  3744.        if(pictFormat->direct.alphaMask > 0) {
  3745.            fbconfig = fbconfigs[i];
  3746.            break;
  3747.        }
  3748.  
  3749.        XFree(visinfo);
  3750.    }
  3751.  
  3752.    if (i == numfbconfigs) {
  3753.        /* None of the FBConfigs have alpha.  Use a normal (opaque)
  3754.         * FBConfig instead */
  3755.        fbconfig = fbconfigs[0];
  3756.        visinfo = glXGetVisualFromFBConfig(dpy, fbconfig);
  3757.        pictFormat = XRenderFindVisualFormat(dpy, visinfo->visual);
  3758.    }
  3759.  
  3760.    XFree(fbconfigs);
  3761.  
  3762.  
  3763. When rendering to a 32-bit window, keep in mind that the X RENDER extension,
  3764. used by most composite managers, expects "premultiplied alpha" colors. This
  3765. means that if your color has components (r,g,b) and alpha value a, then you
  3766. must render (a*r, a*g, a*b, a) into the target window.
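
For example (illustrative values only), 50% translucent pure red, with color
components (1.0, 0.0, 0.0) and alpha 0.5, should be rendered as
(0.5, 0.0, 0.0, 0.5); rendering (1.0, 0.0, 0.0, 0.5) instead will generally
appear too bright once composited.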
  3767.  
  3768. More information about Composite can be found at
  3769. http://freedesktop.org/Software/CompositeExt
  3770.  
  3771. ______________________________________________________________________________
  3772.  
  3773. Chapter 24. Using the nvidia-settings Utility
  3774. ______________________________________________________________________________
  3775.  
  3776. A graphical configuration utility, 'nvidia-settings', is included with the
  3777. NVIDIA Linux graphics driver. After installing the driver and starting X, you
  3778. can run this configuration utility by running:
  3779.  
  3780.    % nvidia-settings
  3781.  
  3782. in a terminal window.
  3783.  
  3784. Detailed information about the configuration options available are documented
  3785. in the help window in the utility.
  3786.  
  3787. For more information, see the nvidia-settings man page.
  3788.  
  3789. The source code to nvidia-settings is released as GPL and is available here:
  3790. ftp://download.nvidia.com/XFree86/nvidia-settings/
  3791.  
  3792. If you have trouble running the nvidia-settings binary shipped with the NVIDIA
  3793. Linux Graphics Driver, refer to the nvidia-settings entry in Chapter 8.
  3794.  
  3795. ______________________________________________________________________________
  3796.  
  3797. Chapter 25. Configuring SLI and Multi-GPU FrameRendering
  3798. ______________________________________________________________________________
  3799.  
  3800. The NVIDIA Linux driver contains support for NVIDIA SLI FrameRendering and
  3801. NVIDIA Multi-GPU FrameRendering. Both of these technologies allow an OpenGL
  3802. application to take advantage of multiple GPUs to improve visual performance.
  3803.  
  3804. The distinction between SLI and Multi-GPU is straightforward. SLI is used to
  3805. leverage the processing power of GPUs across two or more graphics cards, while
  3806. Multi-GPU is used to leverage the processing power of two GPUs colocated on
  3807. the same graphics card. If you want to link together separate graphics cards,
  3808. you should use the "SLI" X config option. Likewise, if you want to link
  3809. together GPUs on the same graphics card, you should use the "MultiGPU" X
  3810. config option. If you have two cards, each with two GPUs, and you wish to link
  3811. them all together, you should use the "SLI" option.
  3812.  
  3813. In Linux, with two GPUs, SLI and Multi-GPU can each operate in one of three
  3814. modes: Alternate Frame Rendering (AFR), Split Frame Rendering (SFR), and
  3815. Antialiasing (AA). When AFR mode is active, one GPU draws the next frame while
  3816. the other one works on the frame after that. In SFR mode, each frame is split
  3817. horizontally into two pieces, with one GPU rendering each piece. The split
  3818. line is adjusted to balance the load between the two GPUs. AA mode splits
  3819. antialiasing work between the two GPUs. Both GPUs work on the same scene and
  3820. the result is blended together to produce the final frame. This mode is useful
  3821. for applications that spend most of their time processing with the CPU and
  3822. cannot benefit from AFR.
  3823.  
  3824. With four GPUs, the same options are applicable. AFR mode cycles through all
  3825. four GPUs, each GPU rendering a frame in turn. SFR mode splits the frame
  3826. horizontally into four pieces. AA mode splits the work between the four GPUs,
  3827. allowing antialiasing up to 64x. With four GPUs, SLI can also operate in an
  3828. additional mode, Alternate Frame Rendering of Antialiasing (AFR of AA). With
  3829. AFR of AA, pairs of GPUs render alternate frames, each GPU in a pair doing
  3830. half of the antialiasing work. Note that these scenarios apply whether you
  3831. have four separate cards or you have two cards, each with two GPUs.
  3832.  
  3833. Multi-GPU is enabled by setting the "MultiGPU" option in the X configuration
  3834. file; see Appendix B for details about the "MultiGPU" option.
  3835.  
  3836. The nvidia-xconfig utility can be used to set the "MultiGPU" option, rather
  3837. than modifying the X configuration file by hand. For example:
  3838.  
  3839.    % nvidia-xconfig --multigpu=on
  3840.  
  3841.  
  3842. SLI is enabled by setting the "SLI" option in the X configuration file; see
  3843. Appendix B for details about the SLI option.
  3844.  
  3845. The nvidia-xconfig utility can be used to set the SLI option, rather than
  3846. modifying the X configuration file by hand. For example:
  3847.  
  3848.    % nvidia-xconfig --sli=on
  3849.  
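In addition to simply turning SLI on, one of the rendering modes described
earlier (AFR, SFR, or AA) can typically be requested by name; see the "SLI"
option in Appendix B for the exact strings accepted by this driver release.
For example (a sketch only):

   Option "SLI" "AFR"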
  3850.  
  3851.  
  3852. 25A. HARDWARE REQUIREMENTS
  3853.  
  3854. SLI functionality requires:
  3855.  
  3856.   o Identical PCI-Express graphics cards
  3857.  
  3858.   o A supported motherboard
  3859.  
  3860.   o In most cases, a video bridge connecting the two graphics cards
  3861.  
  3862. For the latest in supported SLI and Multi-GPU configurations, including SLI-
  3863. and Multi-GPU capable GPUs and SLI-capable motherboards, see
  3864. http://www.slizone.com.
  3865.  
  3866.  
  3867. 25B. OTHER NOTES AND REQUIREMENTS
  3868.  
  3869. The following other requirements apply to SLI and Multi-GPU:
  3870.  
  3871.   o Mobile GPUs are NOT supported
  3872.  
  3873.   o SLI on Quadro-based graphics cards always requires a video bridge
  3874.  
  3875.   o TwinView is also not supported with SLI or Multi-GPU. Only one display
  3876.     can be used when SLI or Multi-GPU is enabled.
  3877.  
  3878.   o If X is configured to use multiple screens and screen 0 has SLI or
  3879.     Multi-GPU enabled, the other screens will be disabled. Note that if SLI
  3880.     or Multi-GPU is enabled, the GPUs used by that configuration will be
  3881.     unavailable for single GPU rendering.
  3882.  
  3883.  
  3884.  
  3885. FREQUENTLY ASKED SLI AND MULTI-GPU QUESTIONS
  3886.  
  3887. Q. Why is glxgears slower when SLI or Multi-GPU is enabled?
  3888.  
  3889. A. When SLI or Multi-GPU is enabled, the NVIDIA driver must coordinate the
  3890.   operations of all GPUs when each new frame is swapped (made visible). For
  3891.   most applications, this GPU synchronization overhead is negligible.
  3892.   However, because glxgears renders so many frames per second, the GPU
  3893.   synchronization overhead consumes a significant portion of the total time,
  3894.   and the framerate is reduced.
  3895.  
  3896.  
  3897. Q. Why is Doom 3 slower when SLI or Multi-GPU is enabled?
  3898.  
  3899. A. The NVIDIA Accelerated Linux Graphics Driver does not automatically detect
  3900.   the optimal SLI or Multi-GPU settings for games such as Doom 3 and Quake 4.
  3901.   To work around this issue, the environment variable __GL_DOOM3 can be set
  3902.   to tell OpenGL that Doom 3's optimal settings should be used. In Bash, this
  3903.   can be done in the same command that launches Doom 3 so the environment
  3904.   variable does not remain set for other OpenGL applications started in the
  3905.   same session:
  3906.  
  3907.       % __GL_DOOM3=1 doom3
  3908.  
  3909.   Doom 3's startup script can also be modified to set this environment
  3910.   variable:
  3911.  
  3912.       #!/bin/sh
  3913.       # Needed to make symlinks/shortcuts work.
  3914.       # the binaries must run with correct working directory
  3915.       cd "/usr/local/games/doom3/"
  3916.       export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:.
  3917.       export __GL_DOOM3=1
  3918.       exec ./doom.x86 "$@"
  3919.  
  3920.   This environment variable is temporary and will be removed in the future.
  3921.  
  3922.  
  3923. Q. Why does SLI or MultiGPU fail to initialize?
  3924.  
  3925. A. There are several reasons why SLI or MultiGPU may fail to initialize. Most
  3926.   of these should be clear from the warning message in the X log file; e.g.:
  3927.  
  3928.      o "Unsupported bus type"
  3929.  
  3930.      o "The video link was not detected"
  3931.  
  3932.      o "GPUs do not match"
  3933.  
  3934.      o "Unsupported GPU video BIOS"
  3935.  
  3936.      o "Insufficient PCI-E link width"
  3937.  
  3938.   The warning message "Unsupported PCI topology" is likely due to problems
  3939.   with your Linux kernel. The NVIDIA driver must have access to the PCI
  3940.   Bridge (often called the Root Bridge) that each NVIDIA GPU is connected to
  3941.   in order to configure SLI or MultiGPU correctly. There are many kernels
  3942.   that do not properly recognize this bridge and, as a result, do not allow
  3943.   the NVIDIA driver to access this bridge. See the "How can I determine if my
  3944.   kernel correctly detects my PCI Bridge?" FAQ below for details.
  3945.  
  3946.   Below are some specific troubleshooting steps to help deal with SLI and
  3947.   MultiGPU initialization failures.
  3948.  
  3949.      o Make sure that ACPI is enabled in your kernel. NVIDIA's experience
  3950.        has been that ACPI is needed for the kernel to correctly recognize
  3951.        the Root Bridge. Note that in some cases, the kernel's version of
  3952.        ACPI may still have problems and require an update to a newer kernel.
  3953.  
  3954.      o Run 'lspci' to check that multiple NVIDIA GPUs can be identified by
  3955.        the operating system; e.g:
  3956.        
  3957.            % /sbin/lspci | grep -i nvidia
  3958.        
  3959.        If 'lspci' does not report all the GPUs that are in your system, then
  3960.        this is a problem with your Linux kernel, and it is recommended that
  3961.        you use a different kernel.
  3962.  
  3963.      o Make sure you have the most recent SBIOS available for your
  3964.        motherboard.
  3965.  
  3966.      o The PCI-Express slots on the motherboard must provide a minimum link
  3967.        width. Please make sure that the PCI Express slot(s) on your
  3968.        motherboard meet the following requirements and that you have
  3969.        connected the graphics board to the correct PCI Express slot(s):
  3970.        
  3971.           o A dual-GPU board needs a minimum of 8 lanes (i.e. x8 or x16)
  3972.        
  3973.           o A pair of single-GPU boards requires one of the following
  3974.             supported link width combinations:
  3975.            
  3976.                o x16 + x16
  3977.            
  3978.                o x16 + x8
  3979.            
  3980.                o x16 + x4
  3981.            
  3982.                o x8 + x8
  3983.            
  3984.            
  3985.        
  3986.  
  3987.  
  3988. Q. How can I determine if my kernel correctly detects my PCI Bridge?
  3989.  
  3990. A. As discussed above, the NVIDIA driver must have access to the PCI Bridge
  3991.   that each NVIDIA GPU is connected to in order to configure SLI or MultiGPU
  3992.   correctly. The following steps will identify whether the kernel correctly
  3993.   recognizes the PCI Bridge:
  3994.  
  3995.      o Identify both NVIDIA GPUs:
  3996.        
  3997.            % /sbin/lspci | grep -i vga
  3998.        
  3999.            0a:00.0 VGA compatible controller: nVidia Corporation [...]
  4000.            81:00.0 VGA compatible controller: nVidia Corporation [...]
  4001.        
  4002.        
  4003.      o Verify that each GPU is connected to a bus connected to the Root
  4004.        Bridge (note that the GPUs in the above example are on buses 0a and
  4005.        81):
  4006.        
  4007.            % /sbin/lspci -t
  4008.        
  4009.        good:
  4010.        
  4011.            -+-[0000:80]-+-00.0
  4012.             |           +-01.0
  4013.             |           \-0e.0-[0000:81]----00.0
  4014.            ...
  4015.             \-[0000:00]-+-00.0
  4016.                         +-01.0
  4017.                         +-01.1
  4018.                         +-0e.0-[0000:0a]----00.0
  4019.        
  4020.        bad:
  4021.        
  4022.            -+-[0000:81]---00.0
  4023.            ...
  4024.             \-[0000:00]-+-00.0
  4025.                         +-01.0
  4026.                         +-01.1
  4027.                         +-0e.0-[0000:0a]----00.0
  4028.        
  4029.        Note that in the first example, bus 81 is connected to Root Bridge
  4030.        80, but that in the second example there is no Root Bridge 80 and bus
  4031.        81 is incorrectly connected at the base of the device tree. In the
  4032.        bad case, the only solution is to upgrade your kernel to one that
  4033.        properly detects your PCI bus layout.
  4034.  
  4035.  
  4036.  
  4037. ______________________________________________________________________________
  4038.  
  4039. Chapter 26. Configuring Frame Lock and Genlock
  4040. ______________________________________________________________________________
  4041.  
  4042. NOTE: Frame Lock and Genlock features are supported only on specific hardware,
  4043. as noted below.
  4044.  
  4045. Visual computing applications that involve multiple displays, or even multiple
  4046. windows within a display, can require special signal processing and
  4047. application controls in order to function properly. For example, in order to
  4048. produce quality video recording of animated graphics, the graphics display
  4049. must be synchronized with the video camera. As another example, applications
  4050. presented on multiple displays must be synchronized in order to complete the
  4051. illusion of a larger, virtual canvas.
  4052.  
  4053. This synchronization is enabled through the frame lock and genlock
  4054. capabilities of the NVIDIA driver. This section describes the setup and use of
  4055. frame lock and genlock.
  4056.  
  4057.  
  4058. 26A. DEFINITION OF TERMS
  4059.  
  4060. GENLOCK: Genlock refers to the process of synchronizing the pixel scanning of
  4061. one or more displays to an external synchronization source. NVIDIA Genlock
  4062. requires the external signal to be either TTL or composite, such as those used
  4063. for NTSC, PAL, or HDTV. Note that the NVIDIA Genlock implementation
  4064. is guaranteed only to be frame-synchronized, and not necessarily
  4065. pixel-synchronized.
  4066.  
  4067. FRAME LOCK: Frame Lock involves the use of hardware to synchronize the frames
  4068. on each display in a connected system. When graphics and video are displayed
  4069. across multiple monitors, frame locked systems help maintain image continuity
  4070. to create a virtual canvas. Frame lock is especially critical for stereo
  4071. viewing, where the left and right fields must be in sync across all displays.
  4072.  
  4073. In short, to enable genlock means to sync to an external signal. To enable
  4074. frame lock means to sync 2 or more display devices to a signal generated
  4075. internally by the hardware, and to use both means to sync 2 or more display
  4076. devices to an external signal.
  4077.  
  4078. SWAP SYNC: Swap sync refers to the synchronization of buffer swaps of multiple
  4079. application windows. By means of swap sync, applications running on multiple
  4080. systems can synchronize the application buffer swaps between all the systems.
  4081. In order to work across multiple systems, swap sync requires that the systems
  4082. are frame locked.
  4083.  
  4084. G-SYNC DEVICE: A G-Sync Device refers to devices capable of Frame
  4085. lock/Genlock. This can be a graphics card (Quadro FX 3000G) or a stand-alone
  4086. device (Quadro FX G-Sync). See "Supported Hardware" below.
  4087.  
  4088.  
  4089. 26B. SUPPORTED HARDWARE
  4090.  
  4091. Frame lock and genlock are supported for the following hardware:
  4092.  
  4093.    Card
  4094.    ----------------------------------------------------------------------
  4095.    Quadro FX 3000G
  4096.    Quadro FX G-Sync, used in conjunction with a Quadro FX 4400, Quadro FX
  4097.    4500, or Quadro FX 5500
  4098.    Quadro FX G-Sync II, used in conjunction with a Quadro FX 4600, or Quadro
  4099.    FX 5600
  4100.  
  4101.  
  4102.  
  4103. 26C. HARDWARE SETUP
  4104.  
  4105. Before you begin, you should check that your hardware has been properly
  4106. installed. If you are using the Quadro FX 3000G, the genlock/frame lock signal
  4107. processing hardware is located on the dual-slot card itself, and after
  4108. installing the card, no additional setup is necessary.
  4109.  
  4110. If you are using the Quadro FX G-Sync card in conjunction with a graphics
  4111. card, the following additional setup steps are required. These steps must be
  4112. performed when the system is off.
  4113.  
  4114.  1. On the Quadro FX G-Sync card, locate the fourteen-pin connector labeled
  4115.     "primary". If the associated ribbon cable is not already joined to this
  4116.     connector, do so now. If you plan to use frame lock or genlock in
  4117.     conjunction with SLI FrameRendering or Multi-GPU FrameRendering (see
  4118.     Chapter 25) or other multi-GPU configurations, you should connect the
  4119.     fourteen-pin connector labeled "secondary" to the second GPU. A section
  4120.     at the end of this chapter describes restrictions on such setups.
  4121.  
  4122.  2. Install the Quadro FX G-Sync card in any available slot. Note that the
  4123.     slot itself is only used for support, so even a known "bad" slot is
  4124.     acceptable. The slot must be close enough to the graphics card that the
  4125.     ribbon cable can reach.
  4126.  
  4127.  3. Connect the other end of the ribbon cable to the fourteen-pin connector
  4128.     on the graphics card.
  4129.  
  4130. You may now boot the system and begin the software setup of genlock and/or
  4131. frame lock. These instructions assume that you have already successfully
  4132. installed the NVIDIA Accelerated Linux Driver Set. If you have not done so,
  4133. see Chapter 4.
  4134.  
  4135.  
  4136. 26D. CONFIGURATION WITH NVIDIA-SETTINGS GUI
  4137.  
  4138. Frame lock and genlock are configured through the nvidia-settings utility. See
  4139. the 'nvidia-settings(1)' man page, and the nvidia-settings online help (click
  4140. the "Help" button in the lower right corner of the interface for per-page help
  4141. information).
  4142.  
  4143. From the nvidia-settings frame lock panel, you may control the addition of
  4144. G-Sync (and display) devices to the frame lock/genlock group, monitor the
  4145. status of that group, and enable/disable frame lock and genlock.
  4146.  
  4147. After the system has booted and X Windows has been started, run
  4148. nvidia-settings as
  4149.  
  4150.    % nvidia-settings
  4151.  
  4152. You may wish to start this utility before continuing, as we refer to it
  4153. frequently in the subsequent discussion.
  4154.  
  4155. The setup of genlock and frame lock is described separately. We then describe
  4156. the use of genlock and frame lock together.
  4157.  
  4158.  
  4159. 26E. GENLOCK SETUP
  4160.  
  4161. After the system has been booted, connect the external signal to the house
  4162. sync connector (the BNC connector) on either the graphics card or the G-Sync
  4163. card. There is a status LED next to the connector. A solid red LED indicates
  4164. that the hardware cannot detect the timing signal. A green LED indicates that
  4165. the hardware is detecting a timing signal. An occasional red flash is okay.
  4166. The G-Sync device (graphics card or G-Sync card) will need to be configured
  4167. correctly for the signal to be detected.
  4168.  
  4169. In the frame lock panel of the nvidia-settings interface, add the X Server
  4170. that contains the display and G-Sync devices that you would like to sync to
  4171. this external source by clicking the "Add Devices..." button. An X Server is
  4172. typically specified in the format "system:m", e.g.:
  4173.  
  4174.    mycomputer.domain.com:0
  4175.  
  4176. or
  4177.  
  4178.    localhost:0
  4179.  
  4180. After adding an X Server, rows will appear in the "G-Sync Devices" section on
  4181. the frame lock panel, displaying relevant status information about the
  4182. G-Sync devices, GPUs attached to those G-Sync devices and the display devices
  4183. driven by those GPUs. In particular, the G-Sync rows will display the server
  4184. name and G-Sync device number along with "Receiving" LED, "Rate", "House" LED,
  4185. "Port0"/"Port1" Images, and "Delay" information. The GPU rows will display the
  4186. GPU product name information along with the GPU ID for the server. The Display
  4187. Device rows will show the display device name and device type along with
  4188. server/client checkboxes, refresh rate, "Timing" LED and "Stereo" LED.
  4189.  
  4190. Once the G-Sync and display devices have been added to the frame lock/genlock
  4191. group, a Server display device will need to be selected. This is done by
  4192. selecting the "Server" checkbox of the desired display device.
  4193.  
  4194. If you are using a G-Sync card, you must also click the "Use House Sync if
  4195. Present" checkbox. To enable synchronization of this G-Sync device to the
  4196. external source, click the "Enable Frame Lock" button. The display device(s)
  4197. may take a moment to stabilize. If it does not stabilize, you may have
  4198. selected a synchronization signal that the system cannot support. You should
  4199. disable synchronization by clicking the "Disable Frame Lock" button and check
  4200. the external sync signal.
  4201.  
  4202. Modifications to genlock settings (e.g., "Use House Sync if Present", "Add
  4203. Devices...") must be done while synchronization is disabled.
  4204.  
  4205.  
  4206. 26F. FRAME LOCK SETUP
  4207.  
  4208. Frame Lock is supported across an arbitrary number of Quadro FX 3000G or Quadro
  4209. FX G-Sync systems, although mixing the two in the same frame lock group is not
  4210. supported. Additionally, each system to be included in the frame lock group
  4211. must be configured with identical mode timings. See Chapter 19 for information
  4212. on mode timings.
  4213.  
  4214. Connect the systems through their RJ45 ports using standard CAT5 patch cables.
  4215. These ports are located on the frame lock card itself (either the Quadro FX
  4216. 3000G or the Quadro FX G-Sync card). DO NOT CONNECT A FRAME LOCK PORT TO AN
  4217. ETHERNET CARD OR HUB. DOING SO MAY PERMANENTLY DAMAGE THE HARDWARE. The
  4218. connections should be made in a daisy-chain fashion: each card has two RJ45
  4219. ports, call them 1 and 2. Connect port 1 of system A to port 2 of system B,
  4220. connect port 1 of system B to port 2 of system C, etc. Note that you will
  4221. always have two empty ports in your frame lock group.
  4222.  
  4223. The ports self-configure as inputs or outputs once frame lock is enabled. Each
  4224. port has a yellow and a green LED that reflect this state. A flashing yellow
  4225. LED indicates an output and a flashing green LED indicates an input. A solid
  4226. green LED indicates that the port has not yet been configured.
  4227.  
  4228. In the frame lock panel of the nvidia-settings interface, add the X server
  4229. that contains the display devices that you would like to include in the frame
  4230. lock group by clicking the "Add Devices..." button (see the description for
  4231. adding display devices in the previous section on GENLOCK SETUP). Like the
  4232. genlock status indicators, the "Port0" and "Port1" columns in the table on the
  4233. frame lock panel contain indicators whose states mirror the states of the
  4234. physical LEDs on the RJ45 ports. Thus, you may monitor the status of these
  4235. ports from the software interface.
  4236.  
  4237. Any X Server can be added to the frame lock group, provided that
  4238.  
  4239.  1. The system supporting the X Server is configured to support frame lock
  4240.     and is connected via RJ45 cable to the other systems in the frame lock
  4241.     group.
  4242.  
  4243.  2. The system driving nvidia-settings can locate and has display privileges
  4244.     on the X server that is to be included for frame lock.
  4245.  
  4246. A system can gain display privileges on a remote system by executing
  4247.  
  4248.    % xhost +
  4249.  
  4250. on the remote system. See the xhost(1) man page for details. Typically, frame
  4251. lock is controlled through one of the systems that will be included in the
  4252. frame lock group. While this is not a requirement, note that nvidia-settings
  4253. will only display the frame lock panel when running on an X server that
  4254. supports frame lock.
  4255.  
  4256. To enable synchronization on these display devices, click the "Enable Frame
  4257. Lock" button. The screens may take a moment to stabilize. If they do not
  4258. stabilize, you may have selected mode timings that one or more of the systems
  4259. cannot support. In this case you should disable synchronization by clicking
  4260. the "Disable Frame Lock" button and refer to Chapter 19 for information on
  4261. mode timings.
  4262.  
  4263. Modifications to frame lock settings (e.g. "Add/Remove Devices...") must be
  4264. done while synchronization is disabled.
  4265.  
  4266.  
  4267. 26G. FRAME LOCK + GENLOCK
  4268.  
  4269. The use of frame lock and genlock together is a simple extension of the above
  4270. instructions for using them separately. You should first follow the
  4271. instructions for Frame Lock Setup, and then attach an external sync source to
  4272. one of the systems that will be included in the frame lock group. In order to
  4273. sync the frame lock group to this single external source, you must select a
  4274. display device driven by the GPU connected to the G-Sync card (through the
  4275. primary connector) that is connected to the external source to be the signal
  4276. server for the group. This is done by selecting the checkbox labeled "Server"
  4277. of the tree on the frame lock panel in nvidia-settings. If you are using a
  4278. G-Sync based frame lock group, you must also select the "Use House Sync if
  4279. Present" checkbox. Enable synchronization by clicking the "Enable Frame Lock"
  4280. button. As with other frame lock/genlock controls, you must select the signal
  4281. server while synchronization is disabled.
  4282.  
  4283.  
  4284. 26H. CONFIGURATION WITH NVIDIA-SETTINGS COMMAND LINE
  4285.  
  4286. Frame Lock may also be configured through the nvidia-settings command line.
  4287. This method of configuring Frame Lock may be useful in a scripted environment
  4288. to automate the setup process. (Note that the examples listed below depend on
  4289. the actual hardware configuration and as such may not work as-is.)
  4290.  
  4291. To properly configure Frame Lock, the following steps should be completed:
  4292.  
  4293.  1. Make sure Frame Lock Sync is disabled on all GPUs.
  4294.  
  4295.  2. Make sure all display devices that are to be frame locked have the same
  4296.     refresh rate.
  4297.  
  4298.  3. Configure which (display/GPU) device should be the master.
  4299.  
  4300.  4. Configure house sync (if applicable).
  4301.  
  4302.  5. Configure the slave display devices.
  4303.  
  4304.  6. Enable frame lock sync on the master GPU.
  4305.  
  4306.  7. Enable frame lock sync on the slave GPUs.
  4307.  
  4308.  8. Toggle the test signal on the master GPU (for testing the hardware
  4309.     connectivity.)
  4310.  
  4311.  
  4312. For a full list of the nvidia-settings Frame Lock attributes, please see the
  4313. 'nvidia-settings(1)' man page. Examples:
  4314.  
  4315.  1. 1 System, 1 Frame Lock board, 1 GPU, and 1 display device syncing to the
  4316.     house signal:
  4317.    
  4318.       # - Make sure frame lock sync is disabled
  4319.       nvidia-settings -a [gpu:0]/FrameLockEnable=0
  4320.       nvidia-settings -q [gpu:0]/FrameLockEnable
  4321.    
  4322.       # - Query the enabled displays on the gpu
  4323.       nvidia-settings -q [gpu:0]/EnabledDisplays
  4324.    
  4325.       # - Check that the refresh rate is the one we want
  4326.       nvidia-settings -q [gpu:0]/RefreshRate
  4327.    
  4328.       # - Set the master display device to CRT-0.  The desired display
  4329.       #   device(s) to be set are passed in as a hexadecimal number
  4330.       #   in which specific bits denote which display devices to set.
  4331.       #   examples:
  4332.       #
  4333.       #   0x00000001 - CRT-0
  4334.       #   0x00000002 - CRT-1
  4335.       #   0x00000003 - CRT-0 and CRT-1
  4336.       #
  4337.       #   0x00000100 - TV-0
  4338.       #   0x00000200 - TV-1
  4339.       #
  4340.       #   0x00020000 - DFP-1
  4341.       #
  4342.       #   0x00010101 - CRT-0, TV-0 and DFP-0
  4343.       #
  4344.       #   0x000000FF - All CRTs
  4345.       #   0x0000FF00 - All TVs
  4346.       #   0x00FF0000 - All DFPs
  4347.       #
  4348.       #   Note that the following command:
  4349.       #
  4350.       #     nvidia-settings -q [gpu:0]/EnabledDisplays
  4351.       #
  4352.       #   will list the available displays on the given GPU.
  4353.    
  4354.       nvidia-settings -a [gpu:0]/FrameLockMaster=0x00000001
  4355.       nvidia-settings -q [gpu:0]/FrameLockMaster
  4356.    
  4357.       # - Enable use of house sync signal
  4358.       nvidia-settings -a [framelock:0]/FrameLockUseHouseSync=1
  4359.    
  4360.       # - Configure the house sync signal video mode
  4361.       nvidia-settings -a [framelock:0]/FrameLockVideoMode=0
  4362.    
  4363.       # - Set the slave display device to none (to avoid
  4364.       #   having unwanted display devices locked to the
  4365.       #   sync signal.)
  4366.       nvidia-settings -a [gpu:0]/FrameLockSlaves=0x00000000
  4367.       nvidia-settings -q [gpu:0]/FrameLockSlaves
  4368.    
  4369.       # - Enable framelocking
  4370.       nvidia-settings -a [gpu:0]/FrameLockEnable=1
  4371.    
  4372.       # - Toggle the test signal
  4373.       nvidia-settings -a [gpu:0]/FrameLockTestSignal=1
  4374.       nvidia-settings -a [gpu:0]/FrameLockTestSignal=0
  4375.    
  4376.    
  4377.  2. 2 Systems, each with 2 GPUs, 1 Frame Lock board and 1 display device per
  4378.     GPU syncing from the first system's first display device:
  4379.    
  4380.       # - Make sure frame lock sync is disabled
  4381.       nvidia-settings -a myserver:0[gpu:0]/FrameLockEnable=0
  4382.       nvidia-settings -a myserver:0[gpu:1]/FrameLockEnable=0
  4383.       nvidia-settings -a myslave1:0[gpu:0]/FrameLockEnable=0
  4384.       nvidia-settings -a myslave1:0[gpu:1]/FrameLockEnable=0
  4385.    
  4386.       # - Query the enabled displays on the GPUs
  4387.       nvidia-settings -q myserver:0[gpu:0]/EnabledDisplays
  4388.       nvidia-settings -q myserver:0[gpu:1]/EnabledDisplays
  4389.       nvidia-settings -q myslave1:0[gpu:0]/EnabledDisplays
  4390.       nvidia-settings -q myslave1:0[gpu:1]/EnabledDisplays
  4391.    
  4392.       # - Check the refresh rate is the same for all displays
  4393.       nvidia-settings -q myserver:0[gpu:0]/RefreshRate
  4394.       nvidia-settings -q myserver:0[gpu:1]/RefreshRate
  4395.       nvidia-settings -q myslave1:0[gpu:0]/RefreshRate
  4396.       nvidia-settings -q myslave1:0[gpu:1]/RefreshRate
  4397.    
  4398.       # - Make sure the display device we want as master is masterable
  4399.       nvidia-settings -q myserver:0[gpu:0]/FrameLockMasterable
  4400.    
  4401.       # - Set the master display device (CRT-0)
  4402.       nvidia-settings -a myserver:0[gpu:0]/FrameLockMaster=0x00000001
  4403.    
  4404.       # - Disable the house sync signal on the master device
  4405.       nvidia-settings -a myserver:0[framelock:0]/FrameLockUseHouseSync=0
  4406.    
  4407.       # - Set the slave display devices
  4408.       nvidia-settings -a myserver:0[gpu:1]/FrameLockSlaves=0x00000001
  4409.       nvidia-settings -a myslave1:0[gpu:0]/FrameLockSlaves=0x00000001
  4410.       nvidia-settings -a myslave1:0[gpu:1]/FrameLockSlaves=0x00000001
  4411.    
  4412.       # - Enable framelocking on server
  4413.       nvidia-settings -a myserver:0[gpu:0]/FrameLockEnable=1
  4414.    
  4415.       # - Enable framelocking on slave devices
  4416.       nvidia-settings -a myserver:0[gpu:1]/FrameLockEnable=1
  4417.       nvidia-settings -a myslave1:0[gpu:0]/FrameLockEnable=1
  4418.       nvidia-settings -a myslave1:0[gpu:1]/FrameLockEnable=1
  4419.    
  4420.       # - Toggle the test signal
  4421.       nvidia-settings -a myserver:0[gpu:0]/FrameLockTestSignal=1
  4422.       nvidia-settings -a myserver:0[gpu:0]/FrameLockTestSignal=0
  4423.    
  4424.    
  4425.  3. 1 System, 4 GPUs, 2 Frame Lock boards and 2 display devices per GPU
  4426.     syncing from the first GPU's display device:
  4427.    
  4428.       # - Make sure frame lock sync is disabled
  4429.       nvidia-settings -a [gpu:0]/FrameLockEnable=0
  4430.       nvidia-settings -a [gpu:1]/FrameLockEnable=0
  4431.       nvidia-settings -a [gpu:2]/FrameLockEnable=0
  4432.       nvidia-settings -a [gpu:3]/FrameLockEnable=0
  4433.    
  4434.       # - Query the enabled displays on the GPUs
  4435.       nvidia-settings -q [gpu:0]/EnabledDisplays
  4436.       nvidia-settings -q [gpu:1]/EnabledDisplays
  4437.       nvidia-settings -q [gpu:2]/EnabledDisplays
  4438.       nvidia-settings -q [gpu:3]/EnabledDisplays
  4439.    
  4440.       # - Check the refresh rate is the same for all displays
  4441.       nvidia-settings -q [gpu:0]/RefreshRate
  4442.       nvidia-settings -q [gpu:1]/RefreshRate
  4443.       nvidia-settings -q [gpu:2]/RefreshRate
  4444.       nvidia-settings -q [gpu:3]/RefreshRate
  4445.    
  4446.       # - Make sure the display device we want as master is masterable
  4447.       nvidia-settings -q [gpu:0]/FrameLockMasterable
  4448.    
  4449.       # - Set the master display device (CRT-0)
  4450.       nvidia-settings -a [gpu:0]/FrameLockMaster=0x00000001
  4451.    
  4452.       # - Disable the house sync signal on the master device
  4453.       nvidia-settings -a [framelock:0]/FrameLockUseHouseSync=0
  4454.    
  4455.       # - Set the slave display devices
  4456.       nvidia-settings -a [gpu:0]/FrameLockSlaves=0x00000002 # CRT-1
  4457.       nvidia-settings -a [gpu:1]/FrameLockSlaves=0x00000003 # CRT-0 and CRT-1
  4458.       nvidia-settings -a [gpu:2]/FrameLockSlaves=0x00000003 # CRT-0 and CRT-1
  4459.       nvidia-settings -a [gpu:3]/FrameLockSlaves=0x00000003 # CRT-0 and CRT-1
  4460.    
  4461.       # - Enable framelocking on master GPU
  4462.       nvidia-settings -a [gpu:0]/FrameLockEnable=1
  4463.    
  4464.       # - Enable framelocking on slave devices
  4465.       nvidia-settings -a [gpu:1]/FrameLockEnable=1
  4466.       nvidia-settings -a [gpu:2]/FrameLockEnable=1
  4467.       nvidia-settings -a [gpu:3]/FrameLockEnable=1
  4468.    
  4469.       # - Toggle the test signal
  4470.       nvidia-settings -a [gpu:0]/FrameLockTestSignal=1
  4471.       nvidia-settings -a [gpu:0]/FrameLockTestSignal=0
  4472.    
  4473.    
  4474.  
  4475.  
  4476. 26I. LEVERAGING FRAME LOCK/GENLOCK IN OPENGL
  4477.  
  4478. With the GLX_NV_swap_group extension, OpenGL applications can be implemented
  4479. to join a group of applications within a system for local swap sync, and bind
  4480. the group to a barrier for swap sync across a frame lock group. A universal
  4481. frame counter is also provided to promote synchronization across applications.
  4482.  
  4483.  
  4484. 26J. FRAME LOCK RESTRICTIONS:
  4485.  
  4486. The following restrictions must be met for enabling frame lock:
  4487.  
  4488.  1. All display devices set as client in a frame lock group must have the
  4489.     same mode timings as the server (master) display device. If a House Sync
  4490.     signal is used (instead of internal timings), all client display devices
  4491.     must be set to have the same refresh rate as the incoming house sync
  4492.     signal.
  4493.  
  4494.  2. All X Screens (driving the selected client/server display devices) must
  4495.     have the same stereo setting. See Appendix B for instructions on how to
  4496.     set the stereo X option; a configuration sketch follows this list.
  4497.  
  4498.  3. The frame lock server (master) display device must be on a GPU on the
  4499.     primary connector to a G-Sync device.
  4500.  
  4501.  4. If connecting a single GPU to a G-Sync device, the primary connector must
  4502.     be used.
  4503.  
  4504.  5. In configurations with more than one display device per GPU, we recommend
  4505.     enabling frame lock on all display devices on those GPUs.
  4506.  
  4507.  6. VT-switching or mode-switching will disable frame lock on the display
  4508.     device. Note that the glXQueryFrameCountNV entry point (provided by the
  4509.     GLX_NV_swap_group extension) will only provide incrementing numbers while
  4510.     frame lock is enabled. Therefore, applications that use
  4511.     glXQueryFrameCountNV to control animation will appear to stop animating
  4512.     while frame lock is disabled.
  4513.  
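The stereo X option mentioned in restriction 2 is set per X screen in the X
configuration file. A minimal sketch is shown below (the section identifiers
are illustrative, and the integer value is only an example -- see the "Stereo"
entry in Appendix B for the list of valid values):

    Section "Screen"
        Identifier "Screen0"           # illustrative
        Device     "Videocard0"        # illustrative
        Option     "Stereo" "3"        # integer stereo mode; see Appendix B
    EndSection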
  4514.  
  4515.  
  4516. 26K. SUPPORTED FRAME LOCK CONFIGURATIONS:
  4517.  
  4518. The following configurations are currently supported:
  4519.  
  4520.  1. Basic Frame Lock: Single GPU, Single X Screen, Single Display Device with
  4521.     or without OpenGL applications that make use of Quad-Buffered Stereo
  4522.     and/or the GLX_NV_swap_group extension.
  4523.  
  4524.  2. Frame Lock + TwinView: Single GPU, Single X Screen, Multiple Display
  4525.     Devices with or without OpenGL applications that make use of
  4526.     Quad-Buffered Stereo and/or the GLX_NV_swap_group extension.
  4527.  
  4528.  3. Frame Lock + Xinerama: 1 or more GPU(s), Multiple X Screens, Multiple
  4529.     Display Devices with or without OpenGL applications that make use of
  4530.     Quad-Buffered Stereo and/or the GLX_NV_swap_group extension.
  4531.  
  4532.  4. Frame Lock + TwinView + Xinerama: 1 or more GPU(s), Multiple X Screens,
  4533.     Multiple Display Devices with or without OpenGL applications that make
  4534.     use of Quad-Buffered Stereo and/or the GLX_NV_swap_group extension.
  4535.  
  4536.  5. Frame Lock + SLI SFR, AFR, or AA: 2 GPUs, Single X Screen, Single Display
  4537.     Device with either OpenGL applications that make use of Quad-Buffered
  4538.     Stereo or the GLX_NV_swap_group extension. Note that for Frame Lock + SLI
  4539.     Frame Rendering, applications that make use of both Quad-Buffered Stereo
  4540.     and the GLX_NV_swap_group extension are not supported. Note that only
  4541.     2-GPU SLI configurations are currently supported.
  4542.  
  4543.  6. Frame Lock + Multi-GPU SFR, AFR, or AA: 2 GPUs, Single X Screen, Single
  4544.     Display Device with either OpenGL applications that make use of
  4545.     Quad-Buffered Stereo or the GLX_NV_swap_group extension. Note that for
  4546.     Frame Lock + Multi-GPU Frame Rendering, applications that make use of both
  4547.     Quad-Buffered Stereo and the GLX_NV_swap_group extension are not
  4548.     supported.
  4549.  
  4550.  
  4551. ______________________________________________________________________________
  4552.  
  4553. Chapter 27. Configuring SDI Video Output
  4554. ______________________________________________________________________________
  4555.  
  4556. Broadcast, film, and video post production and digital cinema applications can
  4557. require Serial Digital (SDI) or High Definition Serial Digital (HD-SDI) video
  4558. output. SDI/HD-SDI is a digital video interface used for the transmission of
  4559. uncompressed video signals as well as packetized data. SDI is standardized in
  4560. ITU-R BT.656 and SMPTE 259M while HD-SDI is standardized in SMPTE 292M. SMPTE
  4561. 372M extends HD-SDI to define a dual-link configuration that uses a pair of
  4562. SMPTE 292M links to provide a 2.970 Gbit/sec interface. SMPTE 424M extends the
  4563. interface further to define a single 2.97 Gbit/sec serial data link.
  4564.  
  4565. SDI and HD-SDI video output is provided through the use of the NVIDIA driver
  4566. along with an NVIDIA SDI output daughter board. In addition to single- and
  4567. dual-link SDI/HD-SDI digital video output, frame lock and genlock
  4568. synchronization are provided in order to synchronize the outgoing video with
  4569. an external source signal (see Chapter 26 for details on these technologies).
  4570. This section describes the setup and use of the SDI video output.
  4571.  
  4572.  
  4573. 27A. HARDWARE SETUP
  4574.  
  4575. Before you begin, you should check that your hardware has been properly
  4576. installed. If you are using the Quadro FX 4000SDI, the SDI/HD-SDI hardware is
  4577. located on the dual-slot card itself, and after installing the card, no
  4578. additional setup is necessary. If you are using the Quadro FX 4500/5500SDI or
  4579. Quadro FX 4600/5600 SDI II, the following additional setup steps are required
  4580. in order to connect the SDI daughter card to the graphics card. These steps
  4581. must be performed when the system is off.
  4582.  
  4583.  1. Insert the NVIDIA SDI Output card into any available expansion slot
  4584.     within six inches of the NVIDIA Quadro graphics card. Secure the card's
  4585.     bracket using the method provided by the chassis manufacturer (usually a
  4586.     thumb screw or an integrated latch).
  4587.  
  4588.  2. Connect one end of the 14-pin ribbon cable to the G-Sync connector on the
  4589.     NVIDIA Quadro graphics card, and the other end to the NVIDIA SDI output
  4590.     card.
  4591.  
  4592.  3. On Quadro FX 4500/5500SDI, connect the SMA-to-BNC cables by screwing the
  4593.     male SMA connectors onto the female SMA connectors on the NVIDIA SDI
  4594.     output card. On Quadro FX 4600/5600 SDI II, this step is not necessary:
  4595.     the SDI II has BNC connectors rather than SMA connectors.
  4596.  
  4597.  4. Connect the DVI-loopback connector by connecting one end of the DVI cable
  4598.     to the DVI connector on the NVIDIA SDI output card and the other end to
  4599.     the "north" DVI connector on the NVIDIA Quadro graphics card. The "north"
  4600.     DVI connector on the NVIDIA Quadro graphics card is the DVI connector
  4601.     that is the farthest from the graphics card PCI-E connection to the
  4602.     motherboard. The SDI output card will NOT function properly if this cable
  4603.     is connected to the "south" DVI connector.
  4604.  
  4605. Once the above installation is complete, you may boot the system and configure
  4606. the SDI video output using nvidia-settings. These instructions assume that you
  4607. have already successfully installed the NVIDIA Linux Accelerated Graphics
  4608. Driver. If you have not done so, see Chapter 4 for details.
  4609.  
  4610.  
  4611. 27B. CLONE MODE CONFIGURATION WITH 'nvidia-settings'
  4612.  
  4613. SDI video output is configured through the nvidia-settings utility. See the
  4614. 'nvidia-settings(1)' man page, and the nvidia-settings online help (click the
  4615. "Help" button in the lower right corner of the interface for per-page help
  4616. information).
  4617.  
  4618. After the system has booted and X Windows has been started, run
  4619. nvidia-settings as
  4620.  
  4621.    % nvidia-settings
  4622.  
  4623. When the NVIDIA X Server Settings page appears, follow the steps below to
  4624. configure the SDI video output.
  4625.  
  4626.  1. Click on the "Graphics to Video Out" tree item on the side menu. This
  4627.     will open the "Graphics to Video Out" page.
  4628.  
  4629.  2. Go to the "Synchronization Options" subpage and choose a synchronization
  4630.     method. From the "Sync Options" dropdown, click the list arrow to the
  4631.     right and then click the method that you want to use to synchronize the
  4632.     SDI output.
  4633.    
  4634.         Sync Method      Description
  4635.         -------------    --------------------------------------------------
  4636.         Free Running     The SDI output will be synchronized with the
  4637.                          timing chosen from the SDI signal format list.
  4638.         Genlock          SDI output will be synchronized with the external
  4639.                          sync signal.
  4640.         Frame Lock       The SDI output will be synchronized with the
  4641.                          timing chosen from the SDI signal format list. In
  4642.                          this case, the list of available timings is
  4643.                          limited to those timings that can be synchronized
  4644.                          with the detected external sync signal.
  4645.    
  4646.    
  4647.     Note that on Quadro FX 4600/5600 SDI II, you must first choose the
  4648.     correct Sync Format before an incoming sync signal will be detected.
  4649.  
  4650.  3. From the top Graphics to Video Out page, choose the output video format
  4651.     that will control the video resolution, field rate, and SMPTE signaling
  4652.     standard for the outgoing video stream. From the "Clone Mode" dropdown
  4653.     box, click the "Video Format" arrow and then click the signal format that
  4654.     you would like to use. Note that only those resolutions that are smaller
  4655.     than or equal to the desktop resolution will be available. Also, this
  4656.     list is pruned according to the sync option selected. If genlock
  4657.     synchronization is chosen, the output video format is automatically set
  4658.     to match the incoming video sync format, and this drop-down list will be
  4659.     grayed out, preventing you from choosing another format. If frame lock
  4660.     synchronization has been selected, then only those modes that are
  4661.     compatible with the detected sync signal will be available.
  4662.  
  4663.  4. Choose the output data format from the "Output Data Format" dropdown
  4664.     list.
  4665.  
  4666.  5. Click the "Enable SDI Output" button to enable video output using the
  4667.     settings above. The status of the SDI output can be verified by examining
  4668.     the LED indicators in the "Graphics to SDI property" page banner.
  4669.  
  4670.  6. To subsequently stop SDI output, simply click on the button that now says
  4671.     "Disable SDI Output".
  4672.  
  4673.  7. To change any of the SDI output parameters, such as the Output Video
  4674.     Format, Output Data Format, or Synchronization Delay, you must first
  4675.     disable the SDI output.
  4676.  
  4677.  
  4678.  
  4679. 27C. CONFIGURATION FOR TWINVIEW OR AS A SEPARATE X SCREEN
  4680.  
  4681. SDI video output can be configured through the nvidia-settings X Server
  4682. Display Configuration page, for use in TwinView or as a separate X screen. The
  4683. SDI video output can be configured as if it were a digital flat panel,
  4684. choosing the resolution, refresh rate, and position within the desktop.
  4685.  
  4686. Similarly, the SDI video output can be configured for use in TwinView or as a
  4687. separate X screen through the X configuration file. The supported SDI video
  4688. output modes can be requested by name anywhere a mode name can be used in the
  4689. X configuration file (either in the "Modes" line, or in the "MetaModes"
  4690. option). E.g.,
  4691.  
  4692.  
  4693. Option "MetaModes" "CRT-0:nvidia-auto-select, DFP-1:1280x720_60.00_smpte296"
  4694.    
  4695.  
  4696. The mode names are reported in the nvidia-settings Display Configuration page
  4697. when in advanced mode.
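
The same mode names can also be used in the "Modes" line of a Display
subsection. A minimal sketch is shown below (the section identifiers are
illustrative, and the mode name is the example used above):

    Section "Screen"
        Identifier "Screen0"           # illustrative
        Device     "Videocard0"        # illustrative
        SubSection "Display"
            Depth 24
            Modes "1280x720_60.00_smpte296"
        EndSubSection
    EndSection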
  4698.  
  4699. Note that SDI "Clone Mode" as configured through the Graphics to Video Out
  4700. page in nvidia-settings is mutually exclusive with using the SDI video output
  4701. in TwinView or as a separate X screen.
  4702.  
  4703. ______________________________________________________________________________
  4704.  
  4705. Chapter 28. Configuring Depth 30 Displays
  4706. ______________________________________________________________________________
  4707.  
  4708. This driver release supports X screens with screen depths of 30 bits per pixel
  4709. (10 bits per color component) on NVIDIA Quadro GPUs based on G80 and higher
  4710. chip architectures. This provides about 1 billion possible colors, allowing
  4711. for higher color precision and smoother gradients.
  4712.  
  4713. When displaying a depth 30 image on a digital flat panel, the color data will
  4714. be dithered to 8 or 6 bits per component, depending on the capabilities of the
  4715. flat panel. VGA outputs can display the full 10-bit range of colors.
  4716.  
  4717. To work reliably, depth 30 requires X.org 7.3 or higher.
  4718.  
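A depth 30 X screen is requested through the normal depth settings in the X
configuration file. A minimal sketch is shown below (the section identifiers
are illustrative):

    Section "Screen"
        Identifier   "Screen0"         # illustrative
        Device       "Videocard0"      # illustrative
        DefaultDepth 30
        SubSection "Display"
            Depth 30
        EndSubSection
    EndSection
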
  4719. NOTE: X servers starting with X.org 7.3 rely on a library called libpixman to
  4720. perform software rendering. As of this writing, the officially released
  4721. version of this library will crash when it encounters depth 30 drawables. To be
  4722. able to run X at this depth, you will need to download, compile, and install
  4723. the "wide-composite" development branch from the freedesktop.org pixman git
  4724. repository. Please see the freedesktop.org and git documentation for
  4725. instructions on how to download and compile development branches.
  4726.  
  4727. In addition to the above software requirements, many X applications and
  4728. toolkits do not understand depth 30 visuals as of this writing. Some programs
  4729. may work correctly, some may work but display incorrect colors, and some may
  4730. simply fail to run. In particular, many OpenGL applications request 8 bits of
  4731. alpha when searching for FBConfigs. Since depth 30 visuals have only 2 bits of
  4732. alpha, no suitable FBConfigs will be found and such applications will fail to
  4733. start.
  4734.  
  4735. ______________________________________________________________________________
  4736.  
  4737. Chapter 29. NVIDIA Contact Info and Additional Resources
  4738. ______________________________________________________________________________
  4739.  
  4740. There is an NVIDIA Linux Driver web forum. You can access it by going to
  4741. http://www.nvnews.net and following the "Forum" and "Linux Discussion Area"
  4742. links. This is the preferred tool for seeking help; users can post questions,
  4743. answer other users' questions, and search the archives of previous postings.
  4744.  
  4745. If all else fails, you can contact NVIDIA for support at:
  4746. linux-bugs@nvidia.com. But please, only send email to this address after you
  4747. have read Chapter 7 and Chapter 8 of this document and asked for help on the
  4748. nvnews.net web forum. When emailing linux-bugs@nvidia.com,
  4749. please include the 'nvidia-bug-report.log.gz' file generated by the
  4750. 'nvidia-bug-report.sh' script (which is installed as part of driver
  4751. installation).
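
For example, the log can be generated as follows (a sketch; the script is
typically run as root so that it can gather complete system information, and
it writes 'nvidia-bug-report.log.gz' in the current directory):

    % su -c nvidia-bug-report.sh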
  4752.  
  4753.  
  4754.  
  4755. Additional Resources
  4756.  
  4757. Linux OpenGL ABI
  4758.  
  4759.     http://oss.sgi.com/projects/ogl-sample/ABI/
  4760.  
  4761. The XFree86 Project
  4762.  
  4763.     http://www.xfree86.org/
  4764.  
  4765. XFree86 Video Timings HOWTO
  4766.  
  4767.     http://www.tldp.org/HOWTO/XFree86-Video-Timings-HOWTO/index.html
  4768.  
  4769. The X.Org Foundation
  4770.  
  4771.     http://www.x.org/
  4772.  
  4773. OpenGL
  4774.  
  4775.     http://www.opengl.org/
  4776.  
  4777.  
  4778. ______________________________________________________________________________
  4779.  
  4780. Chapter 30. Acknowledgements
  4781. ______________________________________________________________________________
  4782.  
  4783. 'nvidia-installer' was inspired by the 'loki_update' tool:
  4784. http://www.lokigames.com/development/loki_update.php3/
  4785.  
  4786. The FTP and HTTP support in 'nvidia-installer' is based upon 'snarf 7.0':
  4787. http://www.xach.com/snarf/
  4788.  
  4789. The self-extracting archive (aka '.run' file) is generated using
  4790. 'makeself.sh': http://www.megastep.org/makeself/
  4791.  
  4792. The driver splash screen is decoded using 'libpng':
  4793. http://libpng.org/pub/png/libpng.html
  4794.  
  4795. This NVIDIA Linux driver contains code from the int10 module of the X.Org
  4796. project.
  4797.  
  4798. The BSD implementations of the following compiler intrinsics are used for
  4799. better portability: __udivdi3, __umoddi3, __moddi3, __ucmpdi2, __cmpdi2,
  4800. __fixunssfdi, and __fixunsdfdi.
  4801.  
  4802. ______________________________________________________________________________
  4803.  
  4804. Appendix A. Supported NVIDIA GPU Products
  4805. ______________________________________________________________________________
  4806.  
  4807. For the most complete and accurate listing of supported GPUs, please see the
  4808. Supported Products List, available from the NVIDIA Linux x86 Graphics Driver
  4809. download page. Please go to http://www.nvidia.com/object/unix.html, follow the
  4810. Archive link under the Linux x86 heading, follow the link for the 173.14.39
  4811. driver, and then go to the Supported Products List.
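
To determine the Device PCI ID of the GPU(s) installed in your system for
comparison against the tables below, a tool such as lspci can be used; for
example (a sketch -- the device ID is the value printed after the NVIDIA
vendor ID "10de", in the form [10de:XXXX]):

    % lspci -nn | grep -i nvidia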
  4812.  
  4813.  
  4814. A1. NVIDIA GEFORCE GPUS
  4815.  
  4816.  
  4817.    NVIDIA GPU product                                        Device PCI ID
  4818.    ------------------------------------------------------    ---------------
  4819.    GeForce 6800 Ultra                                        0x0040
  4820.    GeForce 6800                                              0x0041
  4821.    GeForce 6800 LE                                           0x0042
  4822.    GeForce 6800 XE                                           0x0043
  4823.    GeForce 6800 XT                                           0x0044
  4824.    GeForce 6800 GT                                           0x0045
  4825.    GeForce 6800 GT                                           0x0046
  4826.    GeForce 6800 GS                                           0x0047
  4827.    GeForce 6800 XT                                           0x0048
  4828.    GeForce 7800 GTX                                          0x0090
  4829.    GeForce 7800 GTX                                          0x0091
  4830.    GeForce 7800 GT                                           0x0092
  4831.    GeForce 7800 GS                                           0x0093
  4832.    GeForce 7800 SLI                                          0x0095
  4833.    GeForce Go 7800                                           0x0098
  4834.    GeForce Go 7800 GTX                                       0x0099
  4835.    GeForce 6800 GS                                           0x00C0
  4836.    GeForce 6800                                              0x00C1
  4837.    GeForce 6800 LE                                           0x00C2
  4838.    GeForce 6800 XT                                           0x00C3
  4839.    GeForce Go 6800                                           0x00C8
  4840.    GeForce Go 6800 Ultra                                     0x00C9
  4841.    GeForce 6800                                              0x00F0
  4842.    GeForce 6600 GT                                           0x00F1
  4843.    GeForce 6600                                              0x00F2
  4844.    GeForce 6200                                              0x00F3
  4845.    GeForce 6600 LE                                           0x00F4
  4846.    GeForce 7800 GS                                           0x00F5
  4847.    GeForce 6800 GS                                           0x00F6
  4848.    GeForce 6800 Ultra                                        0x00F9
  4849.    GeForce PCX 5750                                          0x00FA
  4850.    GeForce PCX 5900                                          0x00FB
  4851.    GeForce PCX 5300                                          0x00FC
  4852.    GeForce 6600 GT                                           0x0140
  4853.    GeForce 6600                                              0x0141
  4854.    GeForce 6600 LE                                           0x0142
  4855.    GeForce 6600 VE                                           0x0143
  4856.    GeForce Go 6600                                           0x0144
  4857.    GeForce 6610 XL                                           0x0145
  4858.    GeForce Go 6600 TE/6200 TE                                0x0146
  4859.    GeForce 6700 XL                                           0x0147
  4860.    GeForce Go 6600                                           0x0148
  4861.    GeForce Go 6600 GT                                        0x0149
  4862.    GeForce 6200                                              0x014F
  4863.    GeForce 6500                                              0x0160
  4864.    GeForce 6200 TurboCache(TM)                               0x0161
  4865.    GeForce 6200SE TurboCache(TM)                             0x0162
  4866.    GeForce 6200 LE                                           0x0163
  4867.    GeForce Go 6200                                           0x0164
  4868.    GeForce Go 6400                                           0x0166
  4869.    GeForce Go 6200                                           0x0167
  4870.    GeForce Go 6400                                           0x0168
  4871.    GeForce 6250                                              0x0169
  4872.    GeForce 7100 GS                                           0x016A
  4873.    GeForce 8800 GTX                                          0x0191
  4874.    GeForce 8800 GTS                                          0x0193
  4875.    GeForce 8800 Ultra                                        0x0194
  4876.    Tesla C870                                                0x0197
  4877.    GeForce 7350 LE                                           0x01D0
  4878.    GeForce 7300 LE                                           0x01D1
  4879.    GeForce 7300 SE/7200 GS                                   0x01D3
  4880.    GeForce Go 7200                                           0x01D6
  4881.    GeForce Go 7300                                           0x01D7
  4882.    GeForce Go 7400                                           0x01D8
  4883.    GeForce 7500 LE                                           0x01DD
  4884.    GeForce 7300 GS                                           0x01DF
  4885.    GeForce 6800                                              0x0211
  4886.    GeForce 6800 LE                                           0x0212
  4887.    GeForce 6800 GT                                           0x0215
  4888.    GeForce 6800 XT                                           0x0218
  4889.    GeForce 6200                                              0x0221
  4890.    GeForce 6200 A-LE                                         0x0222
  4891.    GeForce 6150                                              0x0240
  4892.    GeForce 6150 LE                                           0x0241
  4893.    GeForce 6100                                              0x0242
  4894.    GeForce Go 6150                                           0x0244
  4895.    GeForce Go 6100                                           0x0247
  4896.    GeForce 7900 GTX                                          0x0290
  4897.    GeForce 7900 GT/GTO                                       0x0291
  4898.    GeForce 7900 GS                                           0x0292
  4899.    GeForce 7950 GX2                                          0x0293
  4900.    GeForce 7950 GX2                                          0x0294
  4901.    GeForce 7950 GT                                           0x0295
  4902.    GeForce Go 7950 GTX                                       0x0297
  4903.    GeForce Go 7900 GS                                        0x0298
  4904.    GeForce Go 7900 GTX                                       0x0299
  4905.    GeForce 7600 GT                                           0x02E0
  4906.    GeForce 7600 GS                                           0x02E1
  4907.    GeForce 7900 GS                                           0x02E3
  4908.    GeForce 7950 GT                                           0x02E4
  4909.    GeForce FX 5800 Ultra                                     0x0301
  4910.    GeForce FX 5800                                           0x0302
  4911.    GeForce FX 5600 Ultra                                     0x0311
  4912.    GeForce FX 5600                                           0x0312
  4913.    GeForce FX 5600XT                                         0x0314
  4914.    GeForce FX Go5600                                         0x031A
  4915.    GeForce FX Go5650                                         0x031B
  4916.    GeForce FX 5200                                           0x0320
  4917.    GeForce FX 5200 Ultra                                     0x0321
  4918.    GeForce FX 5200                                           0x0322
  4919.    GeForce FX 5200LE                                         0x0323
  4920.    GeForce FX Go5200                                         0x0324
  4921.    GeForce FX Go5250                                         0x0325
  4922.    GeForce FX 5500                                           0x0326
  4923.    GeForce FX 5100                                           0x0327
  4924.    GeForce FX Go5200 32M/64M                                 0x0328
  4925.    GeForce FX Go53xx                                         0x032C
  4926.    GeForce FX Go5100                                         0x032D
  4927.    GeForce FX 5900 Ultra                                     0x0330
  4928.    GeForce FX 5900                                           0x0331
  4929.    GeForce FX 5900XT                                         0x0332
  4930.    GeForce FX 5950 Ultra                                     0x0333
  4931.    GeForce FX 5900ZT                                         0x0334
  4932.    GeForce FX 5700 Ultra                                     0x0341
  4933.    GeForce FX 5700                                           0x0342
  4934.    GeForce FX 5700LE                                         0x0343
  4935.    GeForce FX 5700VE                                         0x0344
  4936.    GeForce FX Go5700                                         0x0347
  4937.    GeForce FX Go5700                                         0x0348
  4938.    GeForce 7650 GS                                           0x0390
  4939.    GeForce 7600 GT                                           0x0391
  4940.    GeForce 7600 GS                                           0x0392
  4941.    GeForce 7300 GT                                           0x0393
  4942.    GeForce 7600 LE                                           0x0394
  4943.    GeForce 7300 GT                                           0x0395
  4944.    GeForce Go 7600                                           0x0398
  4945.    GeForce Go 7600 GT                                        0x0399
  4946.    GeForce 6150SE nForce 430                                 0x03D0
  4947.    GeForce 6100 nForce 405                                   0x03D1
  4948.    GeForce 6100 nForce 400                                   0x03D2
  4949.    GeForce 6100 nForce 420                                   0x03D5
  4950.    GeForce 8600 GTS                                          0x0400
  4951.    GeForce 8600 GT                                           0x0401
  4952.    GeForce 8600 GT                                           0x0402
  4953.    GeForce 8600 GS                                           0x0403
  4954.    GeForce 8400 GS                                           0x0404
  4955.    GeForce 9500M GS                                          0x0405
  4956.    GeForce 8600M GT                                          0x0407
  4957.    GeForce 9650M GS                                          0x0408
  4958.    GeForce 8700M GT                                          0x0409
  4959.    GeForce 8400 SE                                           0x0420
  4960.    GeForce 8500 GT                                           0x0421
  4961.    GeForce 8400 GS                                           0x0422
  4962.    GeForce 8300 GS                                           0x0423
  4963.    GeForce 8400 GS                                           0x0424
  4964.    GeForce 8600M GS                                          0x0425
  4965.    GeForce 8400M GT                                          0x0426
  4966.    GeForce 8400M GS                                          0x0427
  4967.    GeForce 8400M G                                           0x0428
  4968.    GeForce 9300M G                                           0x042E
  4969.    GeForce 7150M / nForce 630M                               0x0531
  4970.    GeForce 7000M / nForce 610M                               0x0533
  4971.    GeForce 7050 PV / NVIDIA nForce 630a                      0x053A
  4972.    GeForce 7050 PV / NVIDIA nForce 630a                      0x053B
  4973.    GeForce 7025 / NVIDIA nForce 630a                         0x053E
  4974.    GeForce 8800 GTS 512                                      0x0600
  4975.    GeForce 8800 GT                                           0x0602
  4976.    GeForce 9800 GX2                                          0x0604
  4977.    GeForce 8800 GS                                           0x0606
  4978.    GeForce 8800M GTS                                         0x0609
  4979.    GeForce 8800M GTX                                         0x060C
  4980.    GeForce 8800 GS                                           0x060D
  4981.    GeForce 9600 GSO                                          0x0610
  4982.    GeForce 8800 GT                                           0x0611
  4983.    GeForce 9800 GTX                                          0x0612
  4984.    GeForce 9600 GT                                           0x0622
  4985.    GeForce 9600M GT                                          0x0647
  4986.    GeForce 9600M GS                                          0x0648
  4987.    GeForce 9600M GT                                          0x0649
  4988.    GeForce 9500M G                                           0x064B
  4989.    GeForce 8400 GS                                           0x06E4
  4990.    GeForce 9300M GS                                          0x06E5
  4991.    GeForce 9200M GS                                          0x06E8
  4992.    GeForce 9300M GS                                          0x06E9
  4993.    GeForce 7150 / NVIDIA nForce 630i                         0x07E0
  4994.    GeForce 7100 / NVIDIA nForce 630i                         0x07E1
  4995.    GeForce 7050 / NVIDIA nForce 610i                         0x07E3
  4996.    GeForce 9100M G                                           0x0844
  4997.    GeForce 8300                                              0x0848
  4998.    GeForce 8200                                              0x0849
  4999.    nForce 730a                                               0x084A
  5000.    GeForce 8200                                              0x084B
  5001.    GeForce 8100 / nForce 720a                                0x084F
  5002.  
  5003.  
  5004.  
  5005. A2. NVIDIA QUADRO GPUS
  5006.  
  5007.  
  5008.    NVIDIA GPU product                                        Device PCI ID
  5009.    ------------------------------------------------------    ---------------
  5010.    Quadro FX 4000                                            0x004E
  5011.    Quadro FX 4500                                            0x009D
  5012.    Quadro FX Go1400                                          0x00CC
  5013.    Quadro FX 3450/4000 SDI                                   0x00CD
  5014.    Quadro FX 1400                                            0x00CE
  5015.    Quadro FX 4400/Quadro FX 3400                             0x00F8
  5016.    Quadro FX 330                                             0x00FC
  5017.    Quadro NVS 280 PCI-E/Quadro FX 330                        0x00FD
  5018.    Quadro FX 1300                                            0x00FE
  5019.    Quadro NVS 440                                            0x014A
  5020.    Quadro FX 540M                                            0x014C
  5021.    Quadro FX 550                                             0x014D
  5022.    Quadro FX 540                                             0x014E
  5023.    Quadro NVS 285                                            0x0165
  5024.    Quadro FX 5600                                            0x019D
  5025.    Quadro FX 4600                                            0x019E
  5026.    Quadro NVS 110M                                           0x01D7
  5027.    Quadro NVS 110M                                           0x01DA
  5028.    Quadro NVS 120M                                           0x01DB
  5029.    Quadro FX 350M                                            0x01DC
  5030.    Quadro FX 350                                             0x01DE
  5031.    Quadro NVS 210S / NVIDIA GeForce 6150LE                   0x0245
  5032.    Quadro FX 2500M                                           0x029A
  5033.    Quadro FX 1500M                                           0x029B
  5034.    Quadro FX 5500                                            0x029C
  5035.    Quadro FX 3500                                            0x029D
  5036.    Quadro FX 1500                                            0x029E
  5037.    Quadro FX 4500 X2                                         0x029F
  5038.    Quadro FX 2000                                            0x0308
  5039.    Quadro FX 1000                                            0x0309
  5040.    Quadro FX Go700                                           0x031C
  5041.    Quadro NVS 55/280 PCI                                     0x032A
  5042.    Quadro FX 500/FX 600                                      0x032B
  5043.    Quadro FX 3000                                            0x0338
  5044.    Quadro FX 700                                             0x033F
  5045.    Quadro FX Go1000                                          0x034C
  5046.    Quadro FX 1100                                            0x034E
  5047.    Quadro FX 560                                             0x039E
  5048.    Quadro FX 370                                             0x040A
  5049.    Quadro NVS 320M                                           0x040B
  5050.    Quadro FX 570M                                            0x040C
  5051.    Quadro FX 1600M                                           0x040D
  5052.    Quadro FX 570                                             0x040E
  5053.    Quadro FX 1700                                            0x040F
  5054.    Quadro NVS 140M                                           0x0429
  5055.    Quadro NVS 130M                                           0x042A
  5056.    Quadro NVS 135M                                           0x042B
  5057.    Quadro FX 360M                                            0x042D
  5058.    Quadro NVS 290                                            0x042F
  5059.    Quadro FX 3700                                            0x061A
  5060.    Quadro FX 3600M                                           0x061C
  5061.  
  5062.  
  5063. Below are the legacy GPUs that are no longer supported in the unified driver.
  5064. These GPUs will continue to be maintained through the special legacy NVIDIA
  5065. GPU driver releases.
  5066.  
  5067. The 96.43.xx driver supports the following set of GPUs:
  5068.  
  5069.  
  5070.    NVIDIA GPU product                    Device PCI ID
  5071.    ----------------------------------    ----------------------------------
  5072.    GeForce2 MX/MX 400                    0x0110
  5073.    GeForce2 MX 100/200                   0x0111
  5074.    GeForce2 Go                           0x0112
  5075.    Quadro2 MXR/EX/Go                     0x0113
  5076.    GeForce4 MX 460                       0x0170
  5077.    GeForce4 MX 440                       0x0171
  5078.    GeForce4 MX 420                       0x0172
  5079.    GeForce4 MX 440-SE                    0x0173
  5080.    GeForce4 440 Go                       0x0174
  5081.    GeForce4 420 Go                       0x0175
  5082.    GeForce4 420 Go 32M                   0x0176
  5083.    GeForce4 460 Go                       0x0177
  5084.    Quadro4 550 XGL                       0x0178
  5085.    GeForce4 440 Go 64M                   0x0179
  5086.    Quadro NVS 400                        0x017A
  5087.    Quadro4 500 GoGL                      0x017C
  5088.    GeForce4 410 Go 16M                   0x017D
  5089.    GeForce4 MX 440 with AGP8X            0x0181
  5090.    GeForce4 MX 440SE with AGP8X          0x0182
  5091.    GeForce4 MX 420 with AGP8X            0x0183
  5092.    GeForce4 MX 4000                      0x0185
  5093.    Quadro4 580 XGL                       0x0188
  5094.    Quadro NVS 280 SD                     0x018A
  5095.    Quadro4 380 XGL                       0x018B
  5096.    Quadro NVS 50 PCI                     0x018C
  5097.    GeForce2 Integrated GPU               0x01A0
  5098.    GeForce4 MX Integrated GPU            0x01F0
  5099.    GeForce3                              0x0200
  5100.    GeForce3 Ti 200                       0x0201
  5101.    GeForce3 Ti 500                       0x0202
  5102.    Quadro DCC                            0x0203
  5103.    GeForce4 Ti 4600                      0x0250
  5104.    GeForce4 Ti 4400                      0x0251
  5105.    GeForce4 Ti 4200                      0x0253
  5106.    Quadro4 900 XGL                       0x0258
  5107.    Quadro4 750 XGL                       0x0259
  5108.    Quadro4 700 XGL                       0x025B
  5109.    GeForce4 Ti 4800                      0x0280
  5110.    GeForce4 Ti 4200 with AGP8X           0x0281
  5111.    GeForce4 Ti 4800 SE                   0x0282
  5112.    GeForce4 4200 Go                      0x0286
  5113.    Quadro4 980 XGL                       0x0288
  5114.    Quadro4 780 XGL                       0x0289
  5115.    Quadro4 700 GoGL                      0x028C
  5116.  
  5117.  
  5118. The 71.86.xx driver supports the following set of GPUs:
  5119.  
  5120.  
  5121.    NVIDIA GPU product                    Device PCI ID
  5122.    ----------------------------------    ----------------------------------
  5123.    RIVA TNT                              0x0020
  5124.    RIVA TNT2/TNT2 Pro                    0x0028
  5125.    RIVA TNT2 Ultra                       0x0029
  5126.    Vanta/Vanta LT                        0x002C
  5127.    RIVA TNT2 Model 64/Model 64 Pro       0x002D
  5128.    Aladdin TNT2                          0x00A0
  5129.    GeForce 256                           0x0100
  5130.    GeForce DDR                           0x0101
  5131.    Quadro                                0x0103
  5132.    GeForce2 GTS/GeForce2 Pro             0x0150
  5133.    GeForce2 Ti                           0x0151
  5134.    GeForce2 Ultra                        0x0152
  5135.    Quadro2 Pro                           0x0153
  5136.  
  5137.  
  5138. ______________________________________________________________________________
  5139.  
  5140. Appendix B. X Config Options
  5141. ______________________________________________________________________________
  5142.  
  5143. The following driver options are supported by the NVIDIA X driver. They may be
  5144. specified either in the Screen or Device sections of the X config file.
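
For example, a minimal sketch of a Device section carrying driver options
(the identifier is illustrative, and the options shown are simply two of the
options documented below):

    Section "Device"
        Identifier "Videocard0"        # illustrative
        Driver     "nvidia"
        Option     "NoLogo" "true"
        Option     "NvAGP"  "1"
    EndSection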
  5145.  
  5146. X Config Options
  5147.  
  5148. Option "NvAGP" "integer"
  5149.  
  5150.    Configure AGP support. Integer argument can be one of:
  5151.    
  5152.        Value             Behavior
  5153.        --------------    ---------------------------------------------------
  5154.        0                 disable AGP
  5155.        1                 use NVIDIA internal AGP support, if possible
  5156.        2                 use AGPGART, if possible
  5157.        3                 use any AGP support (try AGPGART, then NVIDIA AGP)
  5158.    
  5159.    Note that NVIDIA internal AGP support cannot work if AGPGART is either
  5160.    statically compiled into your kernel or is built as a module and loaded
  5161.    into your kernel. See Chapter 12 for details. Default: 3.
  5162.  
  5163. Option "NoLogo" "boolean"
  5164.  
  5165.    Disable drawing of the NVIDIA logo splash screen at X startup. Default:
  5166.    the logo is drawn for screens with depth 24.
  5167.  
  5168. Option "LogoPath" "string"
  5169.  
  5170.    Sets the path to the PNG file to be used as the logo splash screen at X
  5171.    startup. If the PNG file specified has a bKGD (background color) chunk,
  5172.    then the screen is cleared to the color it specifies. Otherwise, the
  5173.    screen is cleared to black. The logo file must be owned by root and must
  5174.    not be writable by a non-root group. Note that a logo is only displayed
  5175.    for screens with depth 24. Default: The built-in NVIDIA logo is used.
  5176.  
  5177. Option "RenderAccel" "boolean"
  5178.  
  5179.    Enable or disable hardware acceleration of the RENDER extension. Default:
  5180.    hardware acceleration of the RENDER extension is enabled.
  5181.  
  5182. Option "NoRenderExtension" "boolean"
  5183.  
  5184.    Disable the RENDER extension. Other than recompiling it, the X server does
  5185.    not seem to have another way of disabling this. Fortunately, we can
  5186.    control this from the driver so we export this option. This is useful in
  5187.    depth 8 where RENDER would normally steal most of the default colormap.
  5188.    Default: RENDER is offered when possible.
  5189.  
  5190. Option "UBB" "boolean"
  5191.  
  5192.    Enable or disable the Unified Back Buffer on Quadro-based GPUs (Quadro4
  5193.    NVS excluded); see Chapter 20 for a description of UBB. This option has no
  5194.    effect on non-Quadro GPU products. Default: UBB is on for Quadro GPUs.
  5195.  
  5196. Option "NoFlip" "boolean"
  5197.  
  5198.    Disable OpenGL flipping; see Chapter 20 for a description. Default: OpenGL
  5199.    will swap by flipping when possible.
  5200.  
  5201. Option "Dac8Bit" "boolean"
  5202.  
  5203.    Most Quadro products by default use a 10-bit color look-up table (LUT);
  5204.    setting this option to TRUE forces these GPUs to use an 8-bit LUT.
  5205.    Default: a 10-bit LUT is used, when available.
  5206.  
  5207. Option "Overlay" "boolean"
  5208.  
  5209.    Enables RGB workstation overlay visuals. This is only supported on Quadro
  5210.    GPUs (Quadro NVS GPUs excluded) in depth 24. This option causes the server
  5211.    to advertise the SERVER_OVERLAY_VISUALS root window property and GLX will
  5212.    report single- and double-buffered, Z-buffered 16-bit overlay visuals. The
  5213.    transparency key is pixel 0x0000 (hex). There is no gamma correction
  5214.    support in the overlay plane. This feature requires XFree86 version 4.1.0
  5215.    or newer, or the X.Org X server. When TwinView is enabled, or the X screen
  5216.    is either wider than 2046 pixels or taller than 2047, the overlay may be
  5217.    emulated with a substantial performance penalty. RGB workstation overlays
  5218.    are not supported when the Composite extension is enabled. Dynamic
  5219.    TwinView is disabled when Overlays are enabled. Default: off.
  5220.  
  5221.    UBB must be enabled when overlays are enabled (this is the default
  5222.    behavior).
  5223.  
  5224. Option "CIOverlay" "boolean"
  5225.  
  5226.    Enables Color Index workstation overlay visuals with identical
  5227.    restrictions to Option "Overlay" above. The server will offer visuals both
  5228.    with and without a transparency key. These are depth 8 PseudoColor
  5229.    visuals. Enabling Color Index overlays on X servers older than XFree86 4.3
  5230.    will force the RENDER extension to be disabled due to bugs in the RENDER
  5231.    extension in older X servers. Color Index workstation overlays are not
  5232.    supported when the Composite extension is enabled. Default: off.
  5233.  
  5234.    UBB must be enabled when overlays are enabled (this is the default
  5235.    behavior).
  5236.  
  5237. Option "TransparentIndex" "integer"
  5238.  
  5239.    When color index overlays are enabled, use this option to choose which
  5240.    pixel is used for the transparent pixel in visuals featuring transparent
  5241.    pixels. This value is clamped between 0 and 255 (Note: some applications
  5242.    such as Alias's Maya require this to be zero in order to work correctly).
  5243.    Default: 0.
  5244.  
  5245. Option "OverlayDefaultVisual" "boolean"
  5246.  
  5247.    When overlays are used, this option sets the default visual to an overlay
  5248.    visual thereby putting the root window in the overlay. This option is not
  5249.    recommended for RGB overlays. Default: off.
  5250.  
  5251. Option "EmulatedOverlaysTimerMs" "integer"
  5252.  
  5253.    Enables the use of a timer within the X server to perform the updates to
  5254.    the emulated overlay or CI overlay. This option can be used to improve the
  5255.    performance of the emulated or CI overlays by reducing the frequency of
  5256.    the updates. The value specified indicates the desired number of
  5257.    milliseconds between overlay updates. To disable the use of the timer
  5258.    either leave the option unset or set it to 0. Default: off.
  5259.  
  5260. Option "EmulatedOverlaysThreshold" "boolean"
  5261.  
  5262.    Enables the use of a threshold within the X server to perform the updates
  5263.    to the emulated overlay or CI overlay. The emulated or CI overlay updates
  5264.    can be deferred, but this threshold will limit the number of deferred
  5265.    OpenGL updates allowed before the overlay is updated. This option can be
  5266.    used to trade off performance and animation quality. Default: on.
  5267.  
  5268. Option "EmulatedOverlaysThresholdValue" "integer"
  5269.  
  5270.    Controls the threshold used in updating the emulated or CI overlays. This
  5271.    is used in conjunction with the EmulatedOverlaysThreshold option to trade
  5272.    off performance and animation quality. Higher values for this option favor
  5273.    performance over quality. Setting low values of this option will not cause
  5274.    the overlay to be updated more often than the frequency specified by the
  5275.    EmulatedOverlaysTimerMs option. Default: 5.
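
   A sketch combining the three emulated-overlay tuning options described
   above (the values are illustrative, not recommendations):

       Option "EmulatedOverlaysTimerMs"        "20"
       Option "EmulatedOverlaysThreshold"      "true"
       Option "EmulatedOverlaysThresholdValue" "10"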
  5276.  
  5277. Option "RandRRotation" "boolean"
  5278.  
  5279.    Enable rotation support for the XRandR extension. This allows use of the
  5280.    XRandR X server extension for configuring the screen orientation through
  5281.    rotation. This feature is supported using depth 24. This requires an X.Org
  5282.    X 6.8.1 or newer X server. This feature does not work with hardware
  5283.    overlays; emulated overlays will be used instead at a substantial
  5284.    performance penalty. See Chapter 17 for details. Default: off.
  5285.  
  5286. Option "Rotate" "string"
  5287.  
  5288.    Enable static rotation support. Unlike the RandRRotation option above,
  5289.    this option takes effect as soon as the X server is started and will work
  5290.    with older versions of X. This feature is supported using depth 24. This
  5291.    feature does not work with hardware overlays; emulated overlays will be
  5292.    used instead at a substantial performance penalty. This option is not
  5293.    compatible with the RandR extension. Valid rotations are "normal", "left",
  5294.    "inverted", and "right". Default: off.
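
   For example, a sketch of each rotation approach (use one or the other,
   since the Rotate option is not compatible with RandR; the rotation value
   is illustrative):

       Option "RandRRotation" "true"      # dynamic rotation via XRandR
       Option "Rotate" "left"             # static rotation at server startup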
  5295.  
  5296. Option "AllowDDCCI" "boolean"
  5297.  
  5298.    Enables DDC/CI support in the NV-CONTROL X extension. DDC/CI is a
  5299.    mechanism for communication between your computer and your display device.
  5300.    This can be used to set the values normally controlled through your
  5301.    display device's On Screen Display. See the DDC/CI NV-CONTROL attributes
  5302.    in 'NVCtrl.h' and functions in 'NVCtrlLib.h' in the 'nvidia-settings'
  5303.    source code. Default: off (DDC/CI is disabled).
  5304.  
  5305.    Note that support for DDC/CI within the NVIDIA X driver's NV-CONTROL
  5306.    extension is deprecated, and will be removed in a future release. Other
  5307.    mechanisms for DDC/CI, such as the kernel i2c subsystem on Linux, are
  5308.    preferred over NV-CONTROL's DDC/CI support.
  5309.  
  5310.    If you would prefer that the NVIDIA X driver's NV-CONTROL X extension not
  5311.    remove DDC/CI support, please make your concerns known by emailing
  5312.    linux-bugs@nvidia.com.
  5313.  
  5314. Option "SWCursor" "boolean"
  5315.  
  5316.    Enable or disable software rendering of the X cursor. Default: off.
  5317.  
  5318. Option "HWCursor" "boolean"
  5319.  
  5320.    Enable or disable hardware rendering of the X cursor. Default: on.
  5321.  
  5322. Option "CursorShadow" "boolean"
  5323.  
  5324.    Enable or disable use of a shadow with the hardware accelerated cursor;
  5325.    this is a black translucent replica of your cursor shape at a given offset
  5326.    from the real cursor. Default: off (no cursor shadow).
  5327.  
  5328. Option "CursorShadowAlpha" "integer"
  5329.  
  5330.    The alpha value to use for the cursor shadow; only applicable if
  5331.    CursorShadow is enabled. This value must be in the range [0, 255] -- 0 is
  5332.    completely transparent; 255 is completely opaque. Default: 64.
  5333.  
  5334. Option "CursorShadowXOffset" "integer"
  5335.  
  5336.    The offset, in pixels, that the shadow image will be shifted to the right
  5337.    from the real cursor image; only applicable if CursorShadow is enabled.
  5338.    This value must be in the range [0, 32]. Default: 4.
  5339.  
  5340. Option "CursorShadowYOffset" "integer"
  5341.  
  5342.    The offset, in pixels, that the shadow image will be shifted down from the
  5343.    real cursor image; only applicable if CursorShadow is enabled. This value
  5344.    must be in the range [0, 32]. Default: 2.
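
   A sketch enabling the cursor shadow together with its related options (the
   values shown are simply the documented defaults):

       Option "CursorShadow"        "true"
       Option "CursorShadowAlpha"   "64"
       Option "CursorShadowXOffset" "4"
       Option "CursorShadowYOffset" "2"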
  5345.  
  5346. Option "ConnectedMonitor" "string"
  5347.  
  5348.    Allows you to override what the NVIDIA kernel module detects is connected
  5349.    to your graphics card. This may be useful, for example, if you use a KVM
  5350.    (keyboard, video, mouse) switch and you are switched away when X is
  5351.    started. In such a situation, the NVIDIA kernel module cannot detect which
  5352.    display devices are connected, and the NVIDIA X driver assumes you have a
  5353.    single CRT.
  5354.  
  5355.    Valid values for this option are "CRT" (cathode ray tube), "DFP" (digital
  5356.    flat panel), or "TV" (television); if using TwinView, this option may be a
  5357.    comma-separated list of display devices; e.g.: "CRT, CRT" or "CRT, DFP".
  5358.  
  5359.    It is generally recommended to not use this option, but instead use the
  5360.    "UseDisplayDevice" option.
  5361.  
  5362.    NOTE: anything attached to a 15-pin VGA connector is regarded by the
  5363.    driver as a CRT. "DFP" should only be used to refer to digital flat panels
  5364.    connected via a DVI port.
  5365.  
  5366.    Default: string is NULL (the NVIDIA driver will detect the connected
  5367.    display devices).
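
   For example, a sketch forcing the driver to behave as though a CRT and a
   DFP are connected (e.g., when the KVM is switched away at X startup):

       Option "ConnectedMonitor" "CRT, DFP"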
  5368.  
  5369. Option "UseDisplayDevice" "string"
  5370.  
  5371.    The "UseDisplayDevice" X configuration option is a list of one or more
  5372.    display devices, which limits the display devices the NVIDIA X driver will
  5373.    consider for an X screen. The display device names used in the option may
  5374.    be either specific (with a numeric suffix; e.g., "DFP-1") or general
  5375.    (without a numeric suffix; e.g., "DFP").
  5376.  
  5377.    When assigning display devices to X screens, the NVIDIA X driver walks
  5378.    through the list of all (not already assigned) display devices detected as
  5379.    connected. When the "UseDisplayDevice" X configuration option is
  5380.    specified, the X driver will only consider connected display devices which
  5381.    are also included in the "UseDisplayDevice" list. This can be thought of
  5382.    as a "mask" against the connected (and not already assigned) display
  5383.    devices.
  5384.  
  5385.    Note the subtle difference between this option and the "ConnectedMonitor"
  5386.    option: the "ConnectedMonitor" option overrides which display devices are
  5387.    actually detected, while the "UseDisplayDevice" option controls which of
  5388.    the detected display devices will be used on this X screen.
  5389.  
  5390.    Of the list of display devices considered for this X screen (either all
  5391.    connected display devices, or a subset limited by the "UseDisplayDevice"
  5392.    option), the NVIDIA X driver first looks at CRTs, then at DFPs, and
  5393.    finally at TVs. For example, if both a CRT and a DFP are connected, by
  5394.    default the X driver would assign the CRT to this X screen. However, by
  5395.    specifying:
  5396.    
  5397.        Option "UseDisplayDevice" "DFP"
  5398.    
  5399.    the X screen would use the DFP instead. Or, if CRT-0, DFP-0, and DFP-1 are
  5400.    connected and TwinView is enabled, the X driver would assign CRT-0 and
  5401.    DFP-0 to the X screen. However, by specifying:
  5402.    
  5403.        Option "UseDisplayDevice" "CRT-0, DFP-1"
  5404.    
  5405.    the X screen would use CRT-0 and DFP-1 instead.
  5406.  
  5407.    Additionally, the special value "none" can be specified for the
  5408.    "UseDisplayDevice" option. When this value is given, any programming of
  5409.    the display hardware is disabled. The NVIDIA driver will not perform any
  5410.    mode validation or modesetting for this X screen. This is intended for use
  5411.    in conjunction with CUDA or in remote graphics solutions such as VNC or
  5412.    Hewlett Packard's Remote Graphics Software (RGS). This functionality is
  5413.    only available on Quadro and Tesla GPUs.
  5414.  
  5415.    Note the following restrictions when setting "UseDisplayDevice" to "none"
  5416.    (a configuration sketch follows this list):
  5417.    
  5418.       o OpenGL SyncToVBlank will have no effect.
  5419.    
  5420.       o You must also explicitly specify the Virtual screen size for your X
  5421.         screen (see the xorg.conf(5x) or XF86Config(5x) manpages for the
  5422.         'Virtual' option, or the nvidia-xconfig(1) manpage for the
  5423.         '--virtual' commandline option); the Virtual screen size must be at
  5424.         least 304x200, and the width must be a multiple of 8.
  5425.    
  5426.       o None of Stereo, Overlay, CIOverlay, or SLI are allowed when
  5427.         "UseDisplayDevice" is set to "none".
  5428.    
  5429.    
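    As a minimal sketch (the Virtual size below is arbitrary, but satisfies the
    minimum-size and width-alignment requirements listed above), a display-less
    X screen might combine:
    
        Option "UseDisplayDevice" "none"
    
    with a 'Virtual' entry in the Display subsection of the same Screen
    section; e.g.:
    
        Virtual 1600 1200
    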
  5430. Option "UseEdidFreqs" "boolean"
  5431.  
  5432.    This option controls whether the NVIDIA X driver will use the HorizSync
  5433.    and VertRefresh ranges given in a display device's EDID, if any. When
  5434.    UseEdidFreqs is set to True, EDID-provided range information will override
  5435.    the HorizSync and VertRefresh ranges specified in the Monitor section. If
  5436.    a display device does not provide an EDID, or the EDID does not specify an
  5437.    hsync or vrefresh range, then the X server will default to the HorizSync
  5438.    and VertRefresh ranges specified in the Monitor section of your X config
  5439.    file. These frequency ranges are used when validating modes for your
  5440.    display device.
  5441.  
  5442.    Default: True (EDID frequencies will be used)
  5443.  
  5444. Option "UseEDID" "boolean"
  5445.  
  5446.    By default, the NVIDIA X driver makes use of a display device's EDID, when
  5447.    available, during construction of its mode pool. The EDID is used as a
  5448.    source for possible modes, for valid frequency ranges, and for collecting
  5449.    data on the physical dimensions of the display device for computing the
  5450.    DPI (see Appendix E). However, if you wish to disable the driver's use of
  5451.    the EDID, you can set this option to False:
  5452.    
  5453.        Option "UseEDID" "FALSE"
  5454.    
  5455.    Note that, rather than globally disable all uses of the EDID, you can
  5456.    individually disable each particular use of the EDID; e.g.,
  5457.    
  5458.        Option "UseEdidFreqs" "FALSE"
  5459.        Option "UseEdidDpi" "FALSE"
  5460.        Option "ModeValidation" "NoEdidModes"
  5461.    
  5462.    Default: True (use EDID).
  5463.  
  5464. Option "IgnoreEDID" "boolean"
  5465.  
  5466.    This option is deprecated, and no longer affects behavior of the X driver.
  5467.    See the "UseEDID" option for details.
  5468.  
  5469. Option "NoDDC" "boolean"
  5470.  
  5471.    Synonym for "IgnoreEDID". This option is deprecated, and no longer affects
  5472.    behavior of the X driver. See the "UseEDID" option for details.
  5473.  
  5474. Option "UseInt10Module" "boolean"
  5475.  
  5476.    Enable use of the X Int10 module to soft-boot all secondary cards, rather
  5477.    than POSTing the cards through the NVIDIA kernel module. Default: off
  5478.    (POSTing is done through the NVIDIA kernel module).
  5479.  
  5480. Option "TwinView" "boolean"
  5481.  
  5482.    Enable or disable TwinView. See Chapter 13 for details. Default: off
  5483.    (TwinView is disabled).
  5484.  
  5485. Option "TwinViewOrientation" "string"
  5486.  
  5487.    Controls the relationship between the two display devices when using
  5488.    TwinView. Takes one of the following values: "RightOf" "LeftOf" "Above"
  5489.    "Below" "Clone". See Chapter 13 for details. Default: string is NULL.
  5490.  
  5491. Option "SecondMonitorHorizSync" "range(s)"
  5492.  
  5493.    This option is like the HorizSync entry in the Monitor section, but is for
  5494.    the second monitor when using TwinView. See Chapter 13 for details.
  5495.    Default: none.
  5496.  
  5497. Option "SecondMonitorVertRefresh" "range(s)"
  5498.  
  5499.    This option is like the VertRefresh entry in the Monitor section, but is
  5500.    for the second monitor when using TwinView. See Chapter 13 for details.
  5501.    Default: none.
  5502.  
  5503. Option "MetaModes" "string"
  5504.  
  5505.    This option describes the combination of modes to use on each monitor when
  5506.    using TwinView. See Chapter 13 for details. Default: string is NULL.
  5507.  
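    As an illustrative sketch only (the orientation, frequency ranges, and
    modes below are placeholders; Chapter 13 is the authoritative reference),
    the TwinView options above might be combined in a Screen section as:
    
        Option "TwinView"                 "True"
        Option "TwinViewOrientation"      "RightOf"
        Option "SecondMonitorHorizSync"   "30-65"
        Option "SecondMonitorVertRefresh" "60"
        Option "MetaModes"                "1280x1024,1024x768; 1024x768,1024x768"
    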
  5508. Option "NoTwinViewXineramaInfo" "boolean"
  5509.  
  5510.    When in TwinView, the NVIDIA X driver normally provides a Xinerama
  5511.    extension that X clients (such as window managers) can use to discover the
  5512.    current TwinView configuration, such as where each display device is
  5513.    positioned within the X screen. Some window managers get confused by this
  5514.    information, so this option is provided to disable this behavior. Default:
  5515.    false (TwinView Xinerama information is provided).
  5516.  
  5517. Option "TwinViewXineramaInfoOrder" "string"
  5518.  
  5519.    When the NVIDIA X driver provides TwinViewXineramaInfo (see the
  5520.    NoTwinViewXineramaInfo X config option), it by default reports the
  5521.    currently enabled display devices in the order "CRT, DFP, TV". The
  5522.    TwinViewXineramaInfoOrder X config option can be used to override this
  5523.    order.
  5524.  
  5525.    The option string is a comma-separated list of display device names. The
  5526.    display device names can either be general (e.g., "CRT", which identifies
  5527.    all CRTs), or specific (e.g., "CRT-1", which identifies a particular CRT).
  5528.    Not all display devices need to be identified in the option string;
  5529.    display devices that are not listed will be implicitly appended to the end
  5530.    of the list, in their default order.
  5531.  
  5532.    Note that TwinViewXineramaInfoOrder tracks all display devices that could
  5533.    possibly be connected to the GPU, not just the ones that are currently
  5534.    enabled. When reporting the Xinerama information, the NVIDIA X driver
  5535.    walks through the display devices in the order specified, only reporting
  5536.    enabled display devices.
  5537.  
  5538.    Examples:
  5539.    
  5540.            "DFP"
  5541.            "TV, DFP"
  5542.            "DFP-1, DFP-0, TV, CRT"
  5543.    
  5544.    In the first example, any enabled DFPs would be reported first (any
  5545.    enabled CRTs or TVs would be reported afterwards). In the second example,
  5546.    any enabled TVs would be reported first, then any enabled DFPs (any
  5547.    enabled CRTs would be reported last). In the last example, if DFP-1 were
  5548.    enabled, it would be reported first, then DFP-0, then any enabled TVs, and
  5549.    then any enabled CRTs; finally, any other enabled DFPs would be reported.
  5550.  
  5551.    Default: "CRT, DFP, TV"
  5552.  
  5553. Option "TwinViewXineramaInfoOverride" "string"
  5554.  
  5555.    This option overrides the values reported by NVIDIA's TwinView Xinerama
  5556.    implementation. This disregards the actual display devices used by the X
  5557.    screen and any order specified in TwinViewXineramaInfoOrder.
  5558.  
  5559.    The option string is interpreted as a comma-separated list of regions,
  5560.    specified as '[width]x[height]+[xoffset]+[yoffset]'. The regions' sizes
  5561.    and offsets are not validated against the X screen size, but are directly
  5562.    reported to any Xinerama client.
  5563.  
  5564.    Examples:
  5565.    
  5566.            "1600x1200+0+0, 1600x1200+1600+0"
  5567.            "1024x768+0+0, 1024x768+1024+0, 1024x768+0+768, 1024x768+1024+768"
  5568.    
  5569.    
  5570. Option "TVStandard" "string"
  5571.  
  5572.    See Chapter 16 for details on configuring TV-out.
  5573.  
  5574. Option "TVOutFormat" "string"
  5575.  
  5576.    See Chapter 16 for details on configuring TV-out.
  5577.  
  5578. Option "TVOverScan" "Decimal value in the range 0.0 to 1.0"
  5579.  
  5580.    Valid values are in the range 0.0 through 1.0; See Chapter 16 for details
  5581.    on configuring TV-out.
  5582.  
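    As a rough, hypothetical example (the standard name must be one of the
    values listed in Chapter 16, and the overscan value is arbitrary within
    the documented range), TV-out might be configured with:
    
        Option "TVStandard" "NTSC-M"
        Option "TVOverScan" "0.6"
    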
  5583. Option "Stereo" "integer"
  5584.  
  5585.    Enable offering of quad-buffered stereo visuals on Quadro. Integer
  5586.    indicates the type of stereo equipment being used:
  5587.    
  5588.        Value             Equipment
  5589.        --------------    ---------------------------------------------------
  5590.        1                 DDC glasses. The sync signal is sent to the
  5591.                          glasses via the DDC signal to the monitor. These
  5592.                          usually involve a passthrough cable between the
  5593.                          monitor and the graphics card. This mode is not
  5594.                          available on G8xGL and higher GPUs.
  5595.        2                 "Blueline" glasses. These usually involve a
  5596.                          passthrough cable between the monitor and graphics
  5597.                          card. The glasses know which eye to display based
  5598.                          on the length of a blue line visible at the bottom
  5599.                          of the screen. When in this mode, the root window
  5600.                          dimensions are one pixel shorter in the Y
  5601.                          dimension than requested. This mode does not work
  5602.                          with virtual root window sizes larger than the
  5603.                          visible root window size (desktop panning). This
  5604.                          mode is not available on G8xGL and higher GPUs.
  5605.        3                 Onboard stereo support. This is usually only found
  5606.                          on professional cards. The glasses connect via a
  5607.                          DIN connector on the back of the graphics card.
  5608.        4                 TwinView clone mode stereo (aka "passive" stereo).
  5609.                          On graphics cards that support TwinView, the left
  5610.                          eye is displayed on the first display, and the
  5611.                          right eye is displayed on the second display. This
  5612.                          is normally used in conjunction with special
  5613.                          projectors to produce 2 polarized images which are
  5614.                          then viewed with polarized glasses. To use this
  5615.                          stereo mode, you must also configure TwinView in
  5616.                          clone mode with the same resolution, panning
  5617.                          offset, and panning domains on each display.
  5618.        5                 Vertical interlaced stereo mode, for use with
  5619.                          SeeReal Stereo Digital Flat Panels.
  5620.        6                 Color interleaved stereo mode, for use with
  5621.                          Sharp3D Stereo Digital Flat Panels.
  5622.    
  5623.    Stereo is only available on Quadro cards. Stereo options 1, 2, and 3 (aka
  5624.    "active" stereo) may be used with TwinView if all modes within each
  5625.    MetaMode have identical timing values. See Chapter 19 for suggestions on
  5626.    making sure the modes within your MetaModes are identical. The identical
  5627.    ModeLine requirement is not necessary for Stereo option 4 ("passive"
  5628.    stereo). Currently, stereo operation may be "quirky" on the original
  5629.    Quadro (NV10) GPU and left-right flipping may be erratic. We are trying to
  5630.    resolve this issue for a future release. Default: 0 (Stereo is not
  5631.    enabled).
  5632.  
  5633.    UBB must be enabled when stereo is enabled (this is the default behavior).
  5634.  
  5635.    Stereo options 1, 2, and 3 (aka "active" stereo) are not supported on
  5636.    digital flat panels.
  5637.  
  5638.    Multi-GPU cards (such as the Quadro FX 4500 X2) provide a single connector
  5639.    for onboard stereo support (option 3), which is tied to the bottommost
  5640.    GPU. In order to synchronize onboard stereo with the other GPU, you must
  5641.    use a G-Sync device (see Chapter 26 for details).
  5642.  
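    As a sketch of the "passive" stereo case described above (option 4),
    assuming two identical displays driven in TwinView clone mode at a
    placeholder resolution:
    
        Option "Stereo"               "4"
        Option "TwinView"             "True"
        Option "TwinViewOrientation"  "Clone"
        Option "MetaModes"            "1024x768,1024x768"
    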
  5643. Option "AllowDFPStereo" "boolean"
  5644.  
  5645.    By default, the NVIDIA X driver performs a check which disables active
  5646.    stereo (stereo options 1, 2, and 3) if the X screen is driving a DFP. The
  5647.    "AllowDFPStereo" option bypasses this check.
  5648.  
  5649. Option "ForceStereoFlipping" "boolean"
  5650.  
  5651.    Stereo flipping is the process by which left and right eyes are displayed
  5652.    on alternating vertical refreshes. Normally, stereo flipping is only
  5653.    performed when a stereo drawable is visible. This option forces stereo
  5654.    flipping even when no stereo drawables are visible.
  5655.  
  5656.    This is to be used in conjunction with the "Stereo" option. If "Stereo" is
  5657.    0, the "ForceStereoFlipping" option has no effect. Otherwise, the
  5658.    "ForceStereoFlipping" option will force the behavior indicated by the
  5659.    "Stereo" option, even if no stereo drawables are visible. This option is
  5660.    useful in a multiple-screen environment in which a stereo application is
  5661.    run on a different screen than the stereo master.
  5662.  
  5663.    Possible values:
  5664.    
  5665.        Value             Behavior
  5666.        --------------    ---------------------------------------------------
  5667.        0                 Stereo flipping is not forced. The default
  5668.                          behavior as indicated by the "Stereo" option is
  5669.                          used.
  5670.        1                 Stereo flipping is forced. Stereo is running even
  5671.                          if no stereo drawables are visible. The stereo
  5672.                          mode depends on the value of the "Stereo" option.
  5673.    
  5674.    Default: 0 (Stereo flipping is not forced). Note that active stereo is not
  5675.    supported on digital flat panels.
  5676.  
  5677. Option "XineramaStereoFlipping" "boolean"
  5678.  
  5679.    By default, when using Stereo with Xinerama, all physical X screens having
  5680.    a visible stereo drawable will stereo flip. Use this option to allow only
  5681.    one physical X screen to stereo flip at a time.
  5682.  
  5683.    This is to be used in conjunction with the "Stereo" and "Xinerama"
  5684.    options. If "Stereo" is 0 or "Xinerama" is 0, the "XineramaStereoFlipping"
  5685.    option has no effect.
  5686.  
  5687.    If you wish to have all X screens stereo flip all the time, see the
  5688.    "ForceStereoFlipping" option.
  5689.  
  5690.    Possible values:
  5691.    
  5692.        Value             Behavior
  5693.        --------------    ---------------------------------------------------
  5694.        0                 Stereo flipping is enabled on one X screen at a
  5695.                          time. Stereo is enabled on the first X screen
  5696.                          having the stereo drawable.
  5697.        1                 Stereo flipping is enabled on all X screens.
  5698.    
  5699.    Default: 1 (Stereo flipping is enabled on all X screens).
  5700.  
  5701. Option "NoBandWidthTest" "boolean"
  5702.  
  5703.    As part of mode validation, the X driver tests if a given mode fits within
  5704.    the hardware's memory bandwidth constraints. This option disables this
  5705.    test. Default: false (the memory bandwidth test is performed).
  5706.  
  5707. Option "IgnoreDisplayDevices" "string"
  5708.  
  5709.    This option tells the NVIDIA kernel module to completely ignore the
  5710.    indicated classes of display devices when checking which display devices
  5711.    are connected. You may specify a comma-separated list containing any of
  5712.    "CRT", "DFP", and "TV". For example:
  5713.    
  5714.    Option "IgnoreDisplayDevices" "DFP, TV"
  5715.    
  5716.    will cause the NVIDIA driver to not attempt to detect if any digital flat
  5717.    panels or TVs are connected. This option is not normally necessary;
  5718.    however, some video BIOSes contain incorrect information about which
  5719.    display devices may be connected, or which i2c port should be used for
  5720.    detection. These errors can cause long delays in starting X. If you are
  5721.    experiencing such delays, you may be able to avoid this by telling the
  5722.    NVIDIA driver to ignore display devices which you know are not connected.
  5723.    NOTE: anything attached to a 15 pin VGA connector is regarded by the
  5724.    driver as a CRT. "DFP" should only be used to refer to digital flat panels
  5725.    connected via a DVI port.
  5726.  
  5727. Option "MultisampleCompatibility" "boolean"
  5728.  
  5729.    Enable or disable the use of separate front and back multisample buffers.
  5730.    Enabling this will consume more memory but is necessary for correct output
  5731.    when rendering to both the front and back buffers of a multisample or FSAA
  5732.    drawable. This option is necessary for correct operation of SoftImage XSI.
  5733.    Default: false (a single multisample buffer is shared between the front
  5734.    and back buffers).
  5735.  
  5736. Option "NoPowerConnectorCheck" "boolean"
  5737.  
  5738.    The NVIDIA X driver will abort X server initialization if it detects that
  5739.    a GPU that requires an external power connector does not have an external
  5740.    power connector plugged in. This option can be used to bypass this test.
  5741.    Default: false (the power connector test is performed).
  5742.  
  5743. Option "XvmcUsesTextures" "boolean"
  5744.  
  5745.    Forces XvMC to use the 3D engine for XvMCPutSurface requests rather than
  5746.    the video overlay. Default: false (video overlay is used when available).
  5747.  
  5748. Option "AllowGLXWithComposite" "boolean"
  5749.  
  5750.    Enables GLX even when the Composite X extension is loaded. ENABLE AT YOUR
  5751.    OWN RISK. OpenGL applications will not display correctly in many
  5752.    circumstances with this setting enabled.
  5753.  
  5754.    This option is intended for use on X.Org X servers older than X11R6.9.0.
  5755.    On X11R6.9.0 or newer X servers, the NVIDIA OpenGL implementation
  5756.    interacts properly by default with the Composite X extension and this
  5757.    option should not be needed. However, on X11R6.9.0 or newer X servers,
  5758.    support for GLX with Composite can be disabled by setting this option to
  5759.    False.
  5760.  
  5761.    Default: false (GLX is disabled when Composite is enabled on X servers
  5762.    older than X11R6.9.0).
  5763.  
  5764. Option "UseCompositeWrapper" "boolean"
  5765.  
  5766.    Enables the X server's "composite wrapper", which performs coordinate
  5767.    translations necessary for the Composite extension.
  5768.  
  5769.    Default: false (the NVIDIA X driver performs its own coordinate
  5770.    translation).
  5771.  
  5772. Option "AddARGBGLXVisuals" "boolean"
  5773.  
  5774.    Adds a 32-bit ARGB visual for each supported OpenGL configuration. This
  5775.    allows applications to use OpenGL to render with alpha transparency into
  5776.    32-bit windows and pixmaps. This option requires the Composite extension.
  5777.    Default: ARGB GLX visuals are enabled on X servers new enough to support
  5778.    them when the Composite extension is also enabled.
  5779.  
  5780. Option "DisableGLXRootClipping" "boolean"
  5781.  
  5782.    If enabled, no clipping will be performed on rendering done by OpenGL in
  5783.    the root window. This option is deprecated. It is needed by older versions
  5784.    of OpenGL-based composite managers that draw the contents of redirected
  5785.    windows directly into the root window using OpenGL. Most OpenGL-based
  5786.    composite managers have been updated to support the Composite Overlay
  5787.    Window, a feature introduced in Xorg release 7.1. Using the Composite
  5788.    Overlay Window is the preferred method for performing OpenGL-based
  5789.    compositing.
  5790.  
  5791. Option "DamageEvents" "boolean"
  5792.  
  5793.    Use OS-level events to efficiently notify X when a client has performed
  5794.    direct rendering to a window that needs to be composited. This will
  5795.    significantly improve performance and interactivity when using GLX
  5796.    applications with a composite manager running. It will also affect
  5797.    applications using GLX when rotation is enabled. This option is currently
  5798.    incompatible with SLI and Multi-GPU modes and will be disabled if either
  5799.    are used. Enabled by default.
  5800.  
  5801. Option "ExactModeTimingsDVI" "boolean"
  5802.  
  5803.    Forces the initialization of the X server with the exact timings specified
  5804.    in the ModeLine. Default: false (for DVI devices, the X server initializes
  5805.    with the closest mode in the EDID list).
  5806.  
  5807. Option "Coolbits" "integer"
  5808.  
  5809.    Enables various unsupported features, such as support for GPU clock
  5810.    manipulation in the NV-CONTROL X extension. This option accepts a bit mask
  5811.    of features to enable.
  5812.  
  5813.    When "1" (Bit 0) is set in the "Coolbits" option value, the
  5814.    nvidia-settings utility will contain a page labeled "Clock Frequencies"
  5815.    through which clock settings can be manipulated. "Coolbits" is only
  5816.    available on GeForce FX, Quadro FX and newer desktop GPUs. On GeForce FX
  5817.    and newer mobile GPUs, limited clock manipulation support is available
  5818.    when "1" is set in the "Coolbits" option value: clocks can be lowered
  5819.    relative to the default settings; overclocking is not supported due to the
  5820.    thermal constraints of notebook designs.
  5821.  
  5822.    WARNING: this may cause system damage and void warranties. This utility
  5823.    can run your computer system out of the manufacturer's design
  5824.    specifications, including, but not limited to: higher system voltages,
  5825.    above normal temperatures, excessive frequencies, and changes to BIOS that
  5826.    may corrupt the BIOS. Your computer's operating system may hang and result
  5827.    in data loss or corrupted images. Depending on the manufacturer of your
  5828.    computer system, the computer system, hardware and software warranties may
  5829.    be voided, and you may not receive any further manufacturer support.
  5830.    NVIDIA does not provide customer service support for the Coolbits option.
  5831.    It is for these reasons that absolutely no warranty or guarantee is either
  5832.    express or implied. Before enabling and using, you should determine the
  5833.    suitability of the utility for your intended use, and you shall assume all
  5834.    responsibility in connection therewith.
  5835.  
  5836.    When "2" (Bit 1) is set in the "Coolbits" option value, the NVIDIA driver
  5837.    will attempt to initialize SLI when using GPUs with different amounts of
  5838.    video memory.
  5839.  
  5840.    The default for this option is 0 (unsupported features are disabled).
  5841.  
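    Because the option is a bit mask, the two features described above can be
    enabled together (subject to the warning given earlier) by adding the bit
    values; e.g.:
    
        Option "Coolbits" "3"
    
    while a value of "1" would enable only the clock frequency controls.
    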
  5842. Option "MultiGPU" "string"
  5843.  
  5844.    This option controls the configuration of Multi-GPU rendering in supported
  5845.    configurations.
  5846.    
  5847.        Value                               Behavior
  5848.        --------------------------------    --------------------------------
  5849.        0, no, off, false, Single           Use only a single GPU when
  5850.                                            rendering
  5851.        1, yes, on, true, Auto              Enable Multi-GPU and allow the
  5852.                                            driver to automatically select
  5853.                                            the appropriate rendering mode.
  5854.        AFR                                 Enable Multi-GPU and use the
  5855.                                            Alternate Frame Rendering mode.
  5856.        SFR                                 Enable Multi-GPU and use the
  5857.                                            Split Frame Rendering mode.
  5858.        AA                                  Enable Multi-GPU and use
  5859.                                            antialiasing. Use this in
  5860.                                            conjunction with full scene
  5861.                                            antialiasing to improve visual
  5862.                                            quality.
  5863.    
  5864.    
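    For example, to let the driver select the Multi-GPU rendering mode
    automatically, any of the values in the second row of the table above may
    be used; e.g.:
    
        Option "MultiGPU" "Auto"
    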
  5865. Option "SLI" "string"
  5866.  
  5867.    This option controls the configuration of SLI rendering in supported
  5868.    configurations.
  5869.    
  5870.        Value                               Behavior
  5871.        --------------------------------    --------------------------------
  5872.        0, no, off, false, Single           Use only a single GPU when
  5873.                                            rendering
  5874.        1, yes, on, true, Auto              Enable SLI and allow the driver
  5875.                                            to automatically select the
  5876.                                            appropriate rendering mode.
  5877.        AFR                                 Enable SLI and use the Alternate
  5878.                                            Frame Rendering mode.
  5879.        SFR                                 Enable SLI and use the Split
  5880.                                            Frame Rendering mode.
  5881.        AA                                  Enable SLI and use SLI
  5882.                                            Antialiasing. Use this in
  5883.                                            conjunction with full scene
  5884.                                            antialiasing to improve visual
  5885.                                            quality.
  5886.        AFRofAA                             Enable SLI and use SLI Alternate
  5887.                                            Frame Rendering of Antialiasing
  5888.                                            mode. Use this in conjunction
  5889.                                            with full scene antialiasing to
  5890.                                            improve visual quality. This
  5891.                                            option is only valid for SLI
  5892.                                            configurations with 4 GPUs.
  5893.    
  5894.    
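    Similarly, one of the values above can be given directly to request a
    specific SLI rendering mode (assuming an SLI-capable configuration); e.g.:
    
        Option "SLI" "AFR"
    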
  5895. Option "TripleBuffer" "boolean"
  5896.  
  5897.    Enable or disable the use of triple buffering. If this option is enabled,
  5898.    OpenGL windows that sync to vblank and are double-buffered will be given a
  5899.    third buffer. This decreases the time an application stalls while waiting
  5900.    for vblank events, but increases latency slightly (delay between user
  5901.    input and displayed result).
  5902.  
  5903. Option "DPI" "string"
  5904.  
  5905.    This option specifies the Dots Per Inch for the X screen; for example:
  5906.    
  5907.        Option "DPI" "75 x 85"
  5908.    
  5909.    will set the horizontal DPI to 75 and the vertical DPI to 85. By default,
  5910.    the X driver will compute the DPI of the X screen from the EDID of any
  5911.    connected display devices. See Appendix E for details. Default: string is
  5912.    NULL (disabled).
  5913.  
  5914. Option "UseEdidDpi" "string"
  5915.  
  5916.    By default, the NVIDIA X driver computes the DPI of an X screen based on
  5917.    the physical size of the display device, as reported in the EDID, and the
  5918.    size in pixels of the first mode to be used on the display device. If
  5919.    multiple display devices are used by the X screen, then the NVIDIA X
  5920.    multiple display devices are used by the X screen, then the NVIDIA X
  5921.    driver will choose which display device to use. This option can be used to
  5922.    device name, such as:
  5923.    
  5924.        Option "UseEdidDpi" "DFP-0"
  5925.    
  5926.    or the argument can be "FALSE" to disable use of EDID-based DPI
  5927.    calculations:
  5928.    
  5929.        Option "UseEdidDpi" "FALSE"
  5930.    
  5931.    See Appendix E for details. Default: string is NULL (the driver computes
  5932.    the DPI from the EDID of a display device and selects the display device).
  5933.  
  5934. Option "ConstantDPI" "boolean"
  5935.  
  5936.    By default on X.Org 6.9 or newer X servers, the NVIDIA X driver recomputes
  5937.    the size in millimeters of the X screen whenever the size in pixels of the
  5938.    X screen is changed using XRandR, such that the DPI remains constant.
  5939.  
  5940.    This behavior can be disabled (which means that the size in millimeters
  5941.    will not change when the size in pixels of the X screen changes) by
  5942.    setting the "ConstantDPI" option to "FALSE"; e.g.,
  5943.    
  5944.        Option "ConstantDPI" "FALSE"
  5945.    
  5946.    ConstantDPI defaults to True.
  5947.  
  5948.    On X servers older than X.Org 6.9, the NVIDIA X driver cannot change the
  5949.    size in millimeters of the X screen. Therefore the DPI of the X screen
  5950.    will change when XRandR changes the size in pixels of the X screen. The
  5951.    driver will behave as if ConstantDPI was forced to FALSE.
  5952.  
  5953. Option "CustomEDID" "string"
  5954.  
  5955.    This option forces the X driver to use the EDID specified in a file rather
  5956.    than the display's EDID. You may specify a semicolon-separated list of
  5957.    display names and filename pairs. The display name is any of "CRT-0",
  5958.    "CRT-1", "DFP-0", "DFP-1", "TV-0", "TV-1". The file contains a raw EDID
  5959.    (e.g., a file generated by nvidia-settings).
  5960.  
  5961.    For example:
  5962.    
  5963.        Option "CustomEDID" "CRT-0:/tmp/edid1.bin; DFP-0:/tmp/edid2.bin"
  5964.    
  5965.    will assign the EDID from the file /tmp/edid1.bin to the display device
  5966.    CRT-0, and the EDID from the file /tmp/edid2.bin to the display device
  5967.    DFP-0. Note that a display device name must always be specified even if
  5968.    only one EDID is specified.
  5969.  
  5970. Option "ModeValidation" "string"
  5971.  
  5972.    This option provides fine-grained control over each stage of the mode
  5973.    validation pipeline, disabling individual mode validation checks. This
  5974.    option should only very rarely be used.
  5975.  
  5976.    The option string is a semicolon-separated list of comma-separated lists
  5977.    of mode validation arguments. Each list of mode validation arguments can
  5978.    optionally be prepended with a display device name.
  5979.    
  5980.        "<dpy-0>: <tok>, <tok>; <dpy-1>: <tok>, <tok>, <tok>; ..."
  5981.    
  5982.    
  5983.    Possible arguments:
  5984.    
  5985.       o "AllowNon60HzDFPModes": some lower quality TMDS encoders are only
  5986.         rated to drive DFPs at 60Hz; the driver will determine when only 60Hz
  5987.         DFP modes are allowed. This argument disables this stage of the mode
  5988.         validation pipeline.
  5989.    
  5990.       o "NoMaxPClkCheck": each mode has a pixel clock; this pixel clock is
  5991.         validated against the maximum pixel clock of the hardware (for a DFP,
  5992.         this is the maximum pixel clock of the TMDS encoder, for a CRT, this
  5993.         is the maximum pixel clock of the DAC). This argument disables the
  5994.         maximum pixel clock checking stage of the mode validation pipeline.
  5995.    
  5996.       o "NoEdidMaxPClkCheck": a display device's EDID can specify the maximum
  5997.         pixel clock that the display device supports; a mode's pixel clock is
  5998.         validated against this pixel clock maximum. This argument disables
  5999.         this stage of the mode validation pipeline.
  6000.    
  6001.       o "AllowInterlacedModes": interlaced modes are not supported on all
  6002.         NVIDIA GPUs; the driver will discard interlaced modes on GPUs where
  6003.         interlaced modes are not supported; this argument disables this stage
  6004.         of the mode validation pipeline.
  6005.    
  6006.       o "NoMaxSizeCheck": each NVIDIA GPU has a maximum resolution that it
  6007.         can drive; this argument disables this stage of the mode validation
  6008.         pipeline.
  6009.    
  6010.       o "NoHorizSyncCheck": a mode's horizontal sync is validated against the
  6011.         range of valid horizontal sync values; this argument disables this
  6012.         stage of the mode validation pipeline.
  6013.    
  6014.       o "NoVertRefreshCheck": a mode's vertical refresh rate is validated
  6015.         against the range of valid vertical refresh rate values; this
  6016.         argument disables this stage of the mode validation pipeline.
  6017.    
  6018.       o "NoWidthAlignmentCheck": the alignment of a mode's visible width is
  6019.         validated against the capabilities of the GPU; normally, a mode's
  6020.         visible width must be a multiple of 8. This argument disables this
  6021.         stage of the mode validation pipeline.
  6022.    
  6023.       o "NoDFPNativeResolutionCheck": when validating for a DFP, a mode's
  6024.         size is validated against the native resolution of the DFP; this
  6025.         argument disables this stage of the mode validation pipeline.
  6026.    
  6027.       o "NoVirtualSizeCheck": if the X configuration file requests a specific
  6028.         virtual screen size, a mode cannot be larger than that virtual size;
  6029.         this argument disables this stage of the mode validation pipeline.
  6030.    
  6031.       o "NoVesaModes": when constructing the mode pool for a display device,
  6032.         the X driver uses a built-in list of VESA modes as one of the mode
  6033.         sources; this argument disables use of these built-in VESA modes.
  6034.    
  6035.       o "NoEdidModes": when constructing the mode pool for a display device,
  6036.         the X driver uses any modes listed in the display device's EDID as
  6037.         one of the mode sources; this argument disables use of EDID-specified
  6038.         modes.
  6039.    
  6040.       o "NoXServerModes": when constructing the mode pool for a display
  6041.         device, the X driver uses the built-in modes provided by the core
  6042.         XFree86/Xorg X server as one of the mode sources; this argument
  6043.         disables use of these modes. Note that this argument does not disable
  6044.         custom ModeLines specified in the X config file; see the
  6045.         "NoCustomModes" argument for that.
  6046.    
  6047.       o "NoCustomModes": when constructing the mode pool for a display
  6048.         device, the X driver uses custom ModeLines specified in the X config
  6049.         file (through the "Mode" or "ModeLine" entries in the Monitor
  6050.         Section) as one of the mode sources; this argument disables use of
  6051.         these modes.
  6052.    
  6053.       o "NoPredefinedModes": when constructing the mode pool for a display
  6054.         device, the X driver uses additional modes predefined by the NVIDIA X
  6055.         driver; this argument disables use of these modes.
  6056.    
  6057.       o "NoUserModes": additional modes can be added to the mode pool
  6058.         dynamically, using the NV-CONTROL X extension; this argument
  6059.         prohibits user-specified modes via the NV-CONTROL X extension.
  6060.    
  6061.       o "NoExtendedGpuCapabilitiesCheck": allow mode timings that may exceed
  6062.         the GPU's extended capability checks.
  6063.    
  6064.       o "ObeyEdidContradictions": an EDID may contradict itself by listing a
  6065.         mode as supported, but the mode may exceed an EDID-specified valid
  6066.         frequency range (HorizSync, VertRefresh, or maximum pixel clock).
  6067.         Normally, the NVIDIA X driver prints a warning in this scenario, but
  6068.         does not invalidate an EDID-specified mode just because it exceeds an
  6069.         EDID-specified valid frequency range. However, the
  6070.         "ObeyEdidContradictions" argument instructs the NVIDIA X driver to
  6071.         invalidate these modes.
  6072.    
  6073.       o "NoTotalSizeCheck": allow modes in which the individual visible or
  6074.         sync pulse timings exceed the total raster size.
  6075.    
  6076.       o "DoubleScanPriority": on GPUs older than G80, doublescan modes are
  6077.         sorted before non-doublescan modes of the same resolution for
  6078.         purposes of modepool sorting; but on G80 and later GPUs, doublescan
  6079.         modes are sorted after non-doublescan modes of the same resolution.
  6080.         This token inverts that priority (i.e., doublescan modes will be
  6081.         sorted after on pre-G80 GPUs, and sorted before on G80 and later
  6082.         GPUs).
  6083.    
  6084.       o "NoDualLinkDVICheck": for mode timings used on dual link DVI DFPs,
  6085.         the driver must perform additional checks to ensure that the correct
  6086.         pixels are sent on the correct link. For some of these checks, the
  6087.         driver will invalidate the mode timings; for other checks, the driver
  6088.         will implicitly modify the mode timings to meet the GPU's dual link
  6089.         DVI requirements. This token disables this dual link DVI checking.
  6090.    
  6091.    
  6092.    Examples:
  6093.    
  6094.        Option "ModeValidation" "NoMaxPClkCheck"
  6095.    
  6096.    disable the maximum pixel clock check when validating modes on all display
  6097.    devices.
  6098.    
  6099.        Option "ModeValidation" "CRT-0: NoEdidModes, NoMaxPClkCheck; DFP-0:
  6100.     NoVesaModes"
  6101.    
  6102.    do not use EDID modes and do not perform the maximum pixel clock check on
  6103.    CRT-0, and do not use VESA modes on DFP-0.
  6104.  
  6105. Option "UseEvents" "boolean"
  6106.  
  6107.    Enables the use of system events in some cases when the X driver is
  6108.    waiting for the hardware. The X driver can briefly spin through a tight
  6109.    loop when waiting for the hardware. With this option the X driver instead
  6110.    sets an event handler and waits for the hardware through the 'poll()'
  6111.    system call. Default: the use of events is disabled.
  6112.  
  6113. Option "FlatPanelProperties" "string"
  6114.  
  6115.    This option requests particular properties for all or a subset of the
  6116.    connected flat panels.
  6117.  
  6118.    The option string is a semicolon-separated list of comma-separated
  6119.    property=value pairs. Each list of property=value pairs can optionally be
  6120.    prepended with a flat panel name.
  6121.    
  6122.        "<DFP-0>: <property=value>, <property=value>; <DFP-1>:
  6123.     <property=value>; ..."
  6124.    
  6125.    
  6126.    Recognized properties:
  6127.    
  6128.       o "Scaling": controls the flat panel scaling mode; possible values are:
  6129.         'Default' (the driver will use whichever scaling state is current),
  6130.         'Native' (the driver will use the flat panel's scaler, if possible),
  6131.         'Scaled' (the driver will use the NVIDIA GPU's scaler, if possible),
  6132.         'Centered' (the driver will center the image, if possible), and
  6133.         'aspect-scaled' (the X driver will scale with the NVIDIA GPU's
  6134.         scaler, but keep the aspect ratio correct).
  6135.    
  6136.       o "Dithering": controls the flat panel dithering mode; possible values
  6137.         are: 'Default' (the driver will decide when to dither), 'Enabled'
  6138.         (the driver will always dither, if possible), and 'Disabled' (the
  6139.         driver will never dither).
  6140.    
  6141.    
  6142.    Examples:
  6143.    
  6144.        Option "FlatPanelProperties" "Scaling = Centered"
  6145.    
  6146.    set the flat panel scaling mode to centered on all flat panels.
  6147.    
  6148.        Option "FlatPanelProperties" "DFP-0: Scaling = Centered; DFP-1:
  6149.     Scaling = Scaled, Dithering = Enabled"
  6150.    
  6151.    set DFP-0's scaling mode to centered, set DFP-1's scaling mode to scaled
  6152.    and its dithering mode to enabled.
  6153.  
  6154. Option "ProbeAllGpus" "boolean"
  6155.  
  6156.    When the NVIDIA X driver initializes, it probes all GPUs in the system,
  6157.    even if no X screens are configured on them. This is done so that the X
  6158.    driver can report information about all the system's GPUs through the
  6159.    NV-CONTROL X extension. This option can be set to FALSE to disable this
  6160.    behavior, such that only GPUs with X screens configured on them will be
  6161.    probed. Default: all GPUs in the system are probed.
  6162.  
  6163. Option "DynamicTwinView" "boolean"
  6164.  
  6165.    Enable or disable support for dynamically configuring TwinView on this X
  6166.    screen. When DynamicTwinView is enabled (the default), the refresh rate of
  6167.    a mode (reported through XF86VidMode or XRandR) is not the actual refresh
  6168.    rate, but instead is a unique number such that each MetaMode
  6169.    has a different value. This is to guarantee that MetaModes can be uniquely
  6170.    identified by XRandR.
  6171.  
  6172.    When DynamicTwinView is disabled, the refresh rate reported through XRandR
  6173.    will be accurate, but NV-CONTROL clients such as nvidia-settings will not
  6174.    be able to dynamically manipulate the X screen's MetaModes. TwinView can
  6175.    still be configured from the X config file when DynamicTwinView is
  6176.    disabled.
  6177.  
  6178.    Default: DynamicTwinView is enabled.
  6179.  
  6180. Option "IncludeImplicitMetaModes" "boolean"
  6181.  
  6182.    When the X server starts, a mode pool is created per display device,
  6183.    containing all the mode timings that the NVIDIA X driver determined to be
  6184.    valid for the display device. However, the only MetaModes that are made
  6185.    available to the X server are the ones explicitly requested in the X
  6186.    configuration file.
  6187.  
  6188.    It is convenient for fullscreen applications to be able to change between
  6189.    the modes in the mode pool, even if a given target mode was not explicitly
  6190.    requested in the X configuration file.
  6191.  
  6192.    To facilitate this, the NVIDIA X driver will, if only one display device
  6193.    is in use when the X server starts, implicitly add MetaModes for all modes
  6194.    in the display device's mode pool. This makes all the modes in the mode
  6195.    pool available to full screen applications that use the XF86VidMode or
  6196.    XRandR X extensions.
  6197.  
  6198.    To prevent this behavior, and only add MetaModes that are explicitly
  6199.    requested in the X configuration file, set this option to FALSE.
  6200.  
  6201.    Default: IncludeImplicitMetaModes is enabled.
  6202.  
  6203. Option "AllowIndirectPixmaps" "boolean"
  6204.  
  6205.    Some graphics cards have more video memory than can be mapped at once by
  6206.    the CPU (generally only 256 MB of video memory can be CPU-mapped). On
  6207.    graphics cards based on G80 and higher with such a memory configuration,
  6208.    this option allows the driver to place more pixmaps in video memory which
  6209.    will improve hardware rendering performance but will slow down software
  6210.    rendering. On some systems, up to 768 megabytes of virtual address space
  6211.    will be reserved in the X server for indirect pixmap access. This virtual
  6212.    memory does not consume any physical resources.
  6213.  
  6214.    Default: on (indirect pixmaps will be used, when available).
  6215.  
  6216. Option "OnDemandVBlankInterrupts" "boolean"
  6217.  
  6218.    Normally, VBlank interrupts are generated on every vertical refresh of
  6219.    every display device connected to the GPU(s) installed in a given system.
  6220.    This experimental option enables on-demand VBlank control, allowing the
  6221.    driver to enable VBlank interrupt generation only when it is required.
  6222.    This can help conserve power.
  6223.  
  6224.    Default: off (on-demand VBlank control is disabled).
  6225.  
  6226. Option "PixmapCacheSize" "size"
  6227.  
  6228.    This option controls how much video memory is reserved for pixmap
  6229.    allocations. When the option is specified, "size" specifies the number of
  6230.    pixels to be used for each of the 8, 16, and 32 bit per pixel pixmap
  6231.    caches. Reserving this memory improves performance when pixmaps are
  6232.    created and destroyed rapidly, but prevents this memory from being used by
  6233.    OpenGL. When this cache is disabled or space in the cache is exhausted,
  6234.    the driver will still allocate pixmaps in video memory but pixmap creation
  6235.    and deletion performance will not be improved.
  6236.  
  6237.    This option may be removed in a future driver release after improvements
  6238.    to the pixmap cache make it obsolete.
  6239.  
  6240.    Example: "Option "PixmapCacheSize" "200000"" will allocate approximately
  6241.    200,000 pixels for each of the pixmap caches.
  6242.  
  6243.    Default: off (no memory is reserved specifically for pixmaps).
  6244.  
  6245. Option "LoadKernelModule" "boolean"
  6246.  
  6247.    Normally, the NVIDIA Linux X driver module will attempt to load the NVIDIA
  6248.    Linux kernel module. Set this option to "off" to disable automatic loading
  6249.    of the NVIDIA kernel module by the NVIDIA X driver. Default: on (the
  6250.    driver loads the kernel module).
  6251.  
  6252. Option "ConnectToAcpid" "boolean"
  6253.  
  6254.    The ACPI daemon (acpid) receives information about ACPI events like
  6255.    AC/Battery power, docking, etc. acpid will deliver these events to the
  6256.    NVIDIA X driver via a UNIX domain socket connection. By default, the
  6257.    NVIDIA X driver will attempt to connect to acpid to receive these events.
  6258.    Set this option to "off" to prevent the NVIDIA X driver from connecting to
  6259.    acpid. Default: on (the NVIDIA X driver will attempt to connect to acpid).
  6260.  
  6261. Option "AcpidSocketPath" "string"
  6262.  
  6263.    The NVIDIA X driver attempts to connect to the ACPI daemon (acpid) via a
  6264.    UNIX domain socket. The default path to this socket is
  6265.    "/var/run/acpid.socket". Set this option to specify an alternate path to
  6266.    acpid's socket. Default: "/var/run/acpid.socket".
  6267.  
  6268. Option "EnableACPIHotkeys" "boolean"
  6269.  
  6270.    The NVIDIA Linux X driver can detect mobile display change hotkey events
  6271.    either through ACPI or by periodically checking the GPU hardware state.
  6272.  
  6273.    While checking the GPU hardware state is generally sufficient to detect
  6274.    display change hotkey events, ACPI hotkey event delivery is preferable.
  6275.    However, X servers prior to X.Org xserver-1.2.0 have a bug that causes the
  6276.    X server to crash when the X server receives an ACPI hotkey event
  6277.    (freedesktop.org bug 8776). The NVIDIA Linux X driver will key off the X
  6278.    server ABI version to determine if the X server in use has this bug (X
  6279.    servers with ABI 1.1 or later do not).
  6280.  
  6281.    Since some X servers may have an earlier ABI but have a patch to fix the
  6282.    bug, the "EnableACPIHotkeys" option can be specified to override the
  6283.    NVIDIA X driver's default decision to enable or disable ACPI display
  6284.    change hotkey events.
  6285.  
  6286.    When running on a mobile system, search for "ACPI display change hotkey
  6287.     events" in your X log to see the NVIDIA X driver's decision.
  6288.  
  6289.    Default: the NVIDIA X driver will decide whether to enable ACPI display
  6290.    change hotkey events based on the X server ABI.
  6291.  
  6292.  
  6293. ______________________________________________________________________________
  6294.  
  6295. Appendix C. Display Device Names
  6296. ______________________________________________________________________________
  6297.  
  6298. A "display device" refers to some piece of hardware capable of displaying an
  6299. image. There are three categories of display devices: analog displays (i.e.,
  6300. CRTs), digital displays (i.e., digital flat panels (DFPs)), and televisions.
  6301. Note that analog flat panels are considered the same as analog CRTs by the
  6302. NVIDIA Linux driver.
  6303.  
  6304. A "display device name" is a string description that uniquely identifies a
  6305. display device; it follows the format "<type>-<number>", for example: "CRT-0",
  6306. "CRT-1", "DFP-0", or "TV-0". Note that the number indicates how the display
  6307. device connector is wired on the graphics card, and has nothing to do with how
  6308. many of that kind of display device are present. This means, for example, that
  6309. you may have a "CRT-1", even if you do not have a "CRT-0". To determine which
  6310. display devices are currently connected, you may check your X log file for a
  6311. line similar to the following:
  6312.  
  6313.    (II) NVIDIA(0): Connected display device(s): CRT-0, DFP-0
  6314.  
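For example, assuming the X log is in its common default location (the path
may differ on your distribution), such a line can be found with:

   % grep "Connected display device" /var/log/Xorg.0.log
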
  6315. Display device names can be used in the MetaMode, HorizSync, and VertRefresh X
  6316. config options to indicate which display device a setting should be applied
  6317. to. For example:
  6318.  
  6319.    Option "MetaModes"   "CRT-0: 1600x1200,  DFP-0: 1024x768"
  6320.    Option "HorizSync"   "CRT-0: 50-110;     DFP-0: 40-70"
  6321.    Option "VertRefresh" "CRT-0: 60-120;     DFP-0: 60"
  6322.  
  6323. Specifying the display device name in these options is not required; if
  6324. display device names are not specified, then the driver attempts to infer
  6325. which display device a setting applies to. In the case of MetaModes, for
  6326. example, the first mode listed is applied to the "first" display device, and
  6327. the second mode listed is applied to the "second" display device.
  6328. Unfortunately, it is often unclear which display device is the "first" or
  6329. "second". That is why specifying the display device name is preferable.
  6330.  
  6331. When specifying display device names, you may also omit the number part of the
  6332. name, though this is only useful if you only have one of that type of display
  6333. device. For example, if you have one CRT and one DFP connected, you may
  6334. reference them in the MetaMode string as follows:
  6335.  
  6336.    Option "MetaModes"   "CRT: 1600x1200,  DFP: 1024x768"
  6337.  
  6338.  
  6339. ______________________________________________________________________________
  6340.  
  6341. Appendix D. GLX Support
  6342. ______________________________________________________________________________
  6343.  
  6344. This release supports GLX 1.4.
  6345.  
  6346. Additionally, the following GLX extensions are supported on appropriate GPUs:
  6347.  
  6348.   o GLX_EXT_visual_info
  6349.  
  6350.   o GLX_EXT_visual_rating
  6351.  
  6352.   o GLX_SGIX_fbconfig
  6353.  
  6354.   o GLX_SGIX_pbuffer
  6355.  
  6356.   o GLX_ARB_get_proc_address
  6357.  
  6358.   o GLX_SGI_video_sync
  6359.  
  6360.   o GLX_SGI_swap_control
  6361.  
  6362.   o GLX_ARB_multisample
  6363.  
  6364.   o GLX_NV_float_buffer
  6365.  
  6366.   o GLX_ARB_fbconfig_float
  6367.  
  6368.   o GLX_NV_swap_group
  6369.  
  6370.   o GLX_NV_video_out
  6371.  
  6372.   o GLX_EXT_texture_from_pixmap
  6373.  
  6374. For a description of these extensions, see the OpenGL extension registry at
  6375. http://www.opengl.org/registry/
  6376.  
  6377. Some of the above extensions exist as part of core GLX 1.4 functionality,
  6378. however, they are also exported as extensions for backwards compatibility.
  6379.  
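As a quick check, assuming the separate glxinfo utility is installed (it is
not part of this driver package), you can verify whether a particular
extension from the list above is exported on a running X server; e.g.:

   % glxinfo | grep GLX_NV_swap_group
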
  6380. ______________________________________________________________________________
  6381.  
  6382. Appendix E. Dots Per Inch
  6383. ______________________________________________________________________________
  6384.  
  6385. DPI (Dots Per Inch), also known as PPI (Pixels Per Inch), is a property of an
  6386. X screen that describes the physical size of pixels. Some X applications, such
  6387. as xterm, can use the DPI of an X screen to determine how large (in pixels) to
  6388. draw an object in order for that object to be displayed at the desired
  6389. physical size on the display device.
  6390.  
  6391. The DPI of an X screen is computed by dividing the size of the X screen in
  6392. pixels by the size of the X screen in inches:
  6393.  
  6394.    DPI = SizeInPixels / SizeInInches
  6395.  
  6396. Since the X screen stores its physical size in millimeters rather than inches
  6397. (1 inch = 25.4 millimeters):
  6398.  
  6399.    DPI = (SizeInPixels * 25.4) / SizeInMillimeters
  6400.  
  6401. The NVIDIA X driver reports the size of the X screen in pixels and in
  6402. millimeters. On X.Org 6.9 or newer, when the XRandR extension resizes the X
  6403. screen in pixels, the NVIDIA X driver computes a new size in millimeters for
  6404. the X screen, to maintain a constant DPI (see the "Physical Size" column of
  6405. the `xrandr -q` output as an example). This is done because a changing DPI can
  6406. cause interaction problems for some applications. To disable this behavior,
  6407. and instead keep the same millimeter size for the X screen (and therefore have
  6408. a changing DPI), set the ConstantDPI option to FALSE (see Appendix B for
  6409. details).
  6410.  
  6411. You can query the DPI of your X screen by running:
  6412.  
  6413.  
  6414.    % xdpyinfo | grep -B1 dot
  6415.  
  6416.  
  6417. which should generate output like this:
  6418.  
  6419.  
  6420.    dimensions:    1280x1024 pixels (382x302 millimeters)
  6421.    resolution:    85x86 dots per inch
  6422.  
  6423.  
  6424.  
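As a worked check, plugging the numbers from this sample output into the
formula above reproduces the reported resolution:

   horizontal DPI = (1280 * 25.4) / 382 = ~85
   vertical DPI   = (1024 * 25.4) / 302 = ~86
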
  6425. The NVIDIA X driver performs several steps during X screen initialization to
  6426. determine the DPI of each X screen:
  6427.  
  6428.  
  6429.   o If the display device provides an EDID, and the EDID contains information
  6430.     about the physical size of the display device, that is used to compute
  6431.     the DPI, along with the size in pixels of the first mode to be used on
  6432.     the display device. If multiple display devices are used by this X
  6433.     screen, then the NVIDIA X driver will choose which display device to use.
  6434.     You can override this with the "UseEdidDpi" X configuration option: you
  6435.     can specify a particular display device to use; e.g.:
  6436.    
  6437.         Option "UseEdidDpi" "DFP-1"
  6438.    
  6439.     or disable EDID-computed DPI by setting this option to false:
  6440.    
  6441.         Option "UseEdidDpi" "FALSE"
  6442.    
  6443.     EDID-based DPI computation is enabled by default when an EDID is
  6444.     available.
  6445.  
  6446.   o If the "-dpi" commandline option to the X server is specified, that is
  6447.     used to set the DPI (see `X -h` for details). This will override the
  6448.     "UseEdidDpi" option.
  6449.  
  6450.   o If the "DPI" X configuration option is specified (see Appendix B for
  6451.     details), that will be used to set the DPI. This will override the
  6452.     "UseEdidDpi" option.
  6453.  
  6454.   o If none of the above are available, then the "DisplaySize" X config file
  6455.     Monitor section information will be used to determine the DPI, if
  6456.     provided; see the xorg.conf or XF86Config man pages for details.
  6457.  
  6458.   o If none of the above are available, the DPI defaults to 75x75.
  6459.  
  6460.  
  6461. You can find how the NVIDIA X driver determined the DPI by looking in your X
  6462. log file. There will be a line that looks something like the following:
  6463.  
  6464.    (--) NVIDIA(0): DPI set to (101, 101); computed from "UseEdidDpi" X config
  6465. option
  6466.  
  6467.  
  6468. Note that the physical size of the X screen, as reported through `xdpyinfo` is
  6469. computed based on the DPI and the size of the X screen in pixels.
  6470.  
  6471. The DPI of an X screen can be confusing when TwinView is enabled: with
  6472. TwinView, multiple display devices (possibly with different DPIs) display
  6473. portions of the same X screen, yet DPI can only be advertised from the X
  6474. server to the X application with X screen granularity. Solutions for this
  6475. include:
  6476.  
  6477.  
  6478.   o Use separate X screens, rather than TwinView; see Chapter 15 for details.
  6479.  
  6480.   o Experiment with different DPI settings to find a DPI that is suitable for
  6481.     both display devices.
  6482.  
  6483.  
  6484. ______________________________________________________________________________
  6485.  
  6486. Appendix F. i2c Bus Support
  6487. ______________________________________________________________________________
  6488.  
  6489. The NVIDIA Linux kernel module now includes i2c (also called I-squared-c,
  6490. Inter-IC Communications, or IIC) functionality that allows the NVIDIA Linux
  6491. kernel module to export i2c ports found on board NVIDIA cards to the Linux
  6492. kernel. This allows i2c devices on-board the NVIDIA graphics card, as well as
  6493. devices connected to the VGA and/or DVI ports, to be accessed from kernel
  6494. modules or userspace programs in a manner consistent with other i2c ports
  6495. exported by the Linux kernel through the i2c framework.
  6496.  
  6497. You must have i2c support compiled into the kernel, or as a module, and X must
  6498. be running. The i2c framework is available for both 2.4 and 2.6 series
  6499. kernels. Linux kernel documentation covers the kernel and userspace /dev APIs,
  6500. which you must use to access NVIDIA i2c ports.
  6501.  
  6502. NVIDIA has noted that in some distributions, i2c support is enabled but the
  6503. Linux kernel module i2c-core.o (2.4) or i2c-core.ko (2.6), which provides the
  6504. export infrastructure, is not shipped. In this case, you will need to
  6505. build the i2c support module. For directions on how to build and install your
  6506. kernel's i2c support, refer to your distribution's documentation for
  6507. configuring, building, and installing the kernel and associated modules.
  6508.  
  6509. For further information regarding the Linux kernel i2c framework, refer to the
  6510. documentation for your kernel, located at .../Documentation/i2c/ within the
  6511. kernel source tree.
  6512.  
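As an illustrative sketch (these tools are not part of the NVIDIA driver), the
userspace utilities from the 'i2c-tools' package can be used to verify that
the NVIDIA i2c ports are visible once the 'i2c-dev' module is loaded; the bus
number "0" below is only a placeholder and will vary from system to system:

   # modprobe i2c-dev
   # i2cdetect -l
   # i2cdetect -F 0

`i2cdetect -l` lists the i2c buses known to the kernel, and `i2cdetect -F`
reports which of the functionality flags listed below a given bus supports.
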
  6513. The following functionality is currently supported:
  6514.  
  6515.  
  6516.  I2C_FUNC_I2C
  6517.  I2C_FUNC_SMBUS_QUICK
  6518.  I2C_FUNC_SMBUS_BYTE
  6519.  I2C_FUNC_SMBUS_BYTE_DATA
  6520.  I2C_FUNC_SMBUS_WORD_DATA
  6521.  
  6522.  
  6523.  
  6524. ______________________________________________________________________________
  6525.  
  6526. Appendix G. XvMC Support
  6527. ______________________________________________________________________________
  6528.  
  6529. This release includes support for the XVideo Motion Compensation (XvMC)
  6530. version 1.0 API on GeForce 5 series, GeForce 6 series and GeForce 7 series
  6531. add-in cards, as well as motherboard chipsets with integrated graphics that
  6532. have PureVideo support. There is a static library, "libXvMCNVIDIA.a", and a
  6533. dynamic one, "libXvMCNVIDIA_dynamic.so", which is suitable for dlopening.
  6534. XvMC's "IDCT" and "motion-compensation" levels of acceleration, AI44 and IA44
  6535. subpictures, and 4:2:0 Surfaces up to 2032x2032 are supported.
  6536.  
  6537. libXvMCNVIDIA observes the XVMC_DEBUG environment variable and will provide
  6538. some debug output to stderr when set to an appropriate integer value. '0'
  6539. disables debug output. '1' enables debug output for failure conditions. '2' or
  6540. higher enables output of warning messages.
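
For example, to enable warning-level output for the current Bourne shell
session and then start an XvMC-capable player (the player name below is only a
placeholder for whatever application you use), you might run:

   % export XVMC_DEBUG=2
   % myplayer videofile.mpg

C-shell users would use `setenv XVMC_DEBUG 2` instead; see the section on
setting environment variables in Appendix H.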
  6541.  
  6542. ______________________________________________________________________________
  6543.  
  6544. Appendix H. Tips for New Linux Users
  6545. ______________________________________________________________________________
  6546.  
  6547. This installation guide assumes that the user has at least a basic
  6548. understanding of Linux techniques and terminology. In this section we provide
  6549. tips that the new user may find helpful. While these tips are meant to
  6550. clarify and assist users in installing and configuring the NVIDIA Linux
  6551. Driver, it is by no means a tutorial on the use or administration of the Linux
  6552. operating system. Unlike many desktop operating systems, it is relatively easy
  6553. to cause irreparable damage to your Linux system. If you are unfamiliar with
  6554. the use of Linux, we strongly recommend that you seek a tutorial through your
  6555. distributor before proceeding.
  6556.  
  6557.  
  6558. H1. THE COMMAND PROMPT
  6559.  
  6560. While newer releases of Linux bring new desktop interfaces to the user, much
  6561. of the work in Linux takes place at the command prompt. If you are familiar
  6562. with the Windows operating system, the Linux command prompt is analogous to
  6563. the Windows command prompt, although the syntax and use varies somewhat. All
  6564. of the commands in this section are performed at the command prompt. Some
  6565. systems are configured to boot into console mode, in which case the user is
  6566. presented with a prompt at login. Other systems are configured to start the X
  6567. window system, in which case the user must open a terminal or console window
  6568. in order to get a command prompt. This can usually be done by searching the
  6569. desktop menus for a terminal or console program. While it is customizable, the
  6570. basic prompt usually consists of a short string of information, one of the
  6571. characters '#', '$', or '%', and a cursor (possibly flashing) that indicates
  6572. where the user's input will be displayed.
  6573.  
  6574.  
  6575. H2. NAVIGATING THE DIRECTORY STRUCTURE
  6576.  
  6577. Linux has a hierarchical directory structure. From anywhere in the directory
  6578. structure, the 'ls' command will list the contents of the current directory.
  6579. The 'file' command will print the type of a given file. For example,
  6580.  
  6581.    % file filename
  6582.  
  6583. will print the type of the file 'filename'. Changing directories is done with
  6584. the 'cd' command.
  6585.  
  6586.    % cd dirname
  6587.  
  6588. will change the current directory to 'dirname'. From anywhere in the directory
  6589. structure, the command 'pwd' will print the name of the current directory.
  6590. There are two special directories, '.' and '..', which refer to the current
  6591. directory and the next directory up the hierarchy, respectively. For any
  6592. commands that require a file name or directory name as an argument, you may
  6593. specify the absolute or the relative paths to those elements. An absolute path
  6594. begins with the "/" character, referring to the top or root of the directory
  6595. structure. A relative path begins with a directory in the current working
  6596. directory. The relative path may begin with '.' or '..'. Elements of a path
  6597. are separated with the "/" character. As an example, if the current directory
  6598. is '/home/jesse' and the user wants to change to the '/usr/local' directory,
  6599. he can use either of the following commands to do so:
  6600.  
  6601.    % cd /usr/local
  6602.  
  6603. or
  6604.  
  6605.    % cd ../../usr/local
  6606.  
  6607.  
  6608.  
  6609. H3. FILE PERMISSIONS AND OWNERSHIP
  6610.  
  6611. All files and directories have permissions and ownership associated with them.
  6612. This is useful for preventing non-administrative users from accidentally (or
  6613. maliciously) corrupting the system. The permissions and ownership for a file
  6614. or directory can be determined by passing the -l option to the 'ls' command.
  6615. For example:
  6616.  
  6617. % ls -l
  6618. drwxr-xr-x     2    jesse    users    4096    Feb     8 09:32 bin
  6619. drwxrwxrwx    10    jesse    users    4096    Feb    10 12:04 pub
  6620. -rw-r--r--     1    jesse    users      45    Feb     4 03:55 testfile
  6621. -rwx------     1    jesse    users      93    Feb     5 06:20 myprogram
  6622. -rw-rw-rw-     1    jesse    users     112    Feb     5 06:20 README
  6623. %
  6624.  
  6625. The first character column in the first output field states the file type,
  6626. where 'd' is a directory and '-' is a regular file. The next nine columns
  6627. specify the permissions (see paragraph below) of the element. The second field
  6628. indicates the number of hard links to the element, the third field
  6629. indicates the owner, the fourth field indicates the group that the file is
  6630. associated with, the fifth field indicates the size of the element in bytes,
  6631. the sixth, seventh and eighth fields indicate the time at which the file was
  6632. last modified and the ninth field is the name of the element.
  6633.  
  6634. As stated, the last nine columns in the first field indicate the permissions
  6635. of the element. These columns are grouped into threes, the first grouping
  6636. indicating the permissions for the owner of the element ('jesse' in this
  6637. case), the second grouping indicating the permissions for the group associated
  6638. with the element, and the third grouping indicating the permissions associated
  6639. with the rest of the world. The 'r', 'w', and 'x' indicate read, write and
  6640. execute permissions, respectively, for each of these associations. For
  6641. example, user 'jesse' has read and write permissions for 'testfile', users in
  6642. the group 'users' have read permission only, and the rest of the world also
  6643. has read permissions only. However, for the file 'myprogram', user 'jesse' has
  6644. read, write and execute permissions (suggesting that 'myprogram' is a program
  6645. that can be executed), while the group 'users' and the rest of the world have
  6646. no permissions (suggesting that the owner doesn't want anyone else to run his
  6647. program). The permissions, ownership and group associated with an element can
  6648. be changed with the commands 'chmod', 'chown' and 'chgrp', respectively. If a
  6649. user with the appropriate permissions wanted to change the user/group
  6650. ownership of 'README' from jesse/users to joe/admin, he would do the
  6651. following:
  6652.  
  6653.    # chown joe README
  6654.    # chgrp admin README
  6655.  
  6656. The syntax for chmod is slightly more complicated and has several variations.
  6657. The most concise way of setting the permissions for a single element uses a
  6658. triplet of numbers, one for each of user, group and world. The value for each
  6659. number in the triplet corresponds to a combination of read, write and execute
  6660. permissions. Execute only is represented as 1, write only is represented as 2,
  6661. and read only is represented as 4. Combinations of these permissions are
  6662. represented as sums of the individual permissions. Read and execute is
  6663. represented as 5, whereas read, write and execute is represented as 7. No
  6664. permissions is represented as 0. Thus, to give the owner read, write and
  6665. execute permissions, the group read and execute permissions and the world no
  6666. permissions, a user would do as follows:
  6667.  
  6668.    % chmod 750 myprogram
  6669.  
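chmod also accepts a symbolic syntax that some users find easier to read. The
following command is equivalent to the numeric example above, where 'u', 'g'
and 'o' refer to the user (owner), the group and others, respectively:

   % chmod u=rwx,g=rx,o= myprogram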
  6670.  
  6671.  
  6672. H4. THE SHELL
  6673.  
  6674. The shell provides an interface between the user and the operating system. It
  6675. is the job of the shell to interpret the input that the user gives at the
  6676. command prompt and call upon the system to do something in response. There are
  6677. several different shells available, each with somewhat different syntax and
  6678. capabilities. The two most common flavors of shells used on Linux stem from
  6679. the Bourne shell ('sh') and the C-shell ('csh'). Different users have
  6680. preferences and biases towards one shell or the other, and some certainly make
  6681. it easier (or at least more intuitive) to do some things than others. You can
  6682. determine your current shell by printing the value of the 'SHELL' environment
  6683. variable from the command prompt with
  6684.  
  6685.    % echo $SHELL
  6686.  
  6687. You can start a new shell simply by entering the name of the shell from the
  6688. command prompt:
  6689.  
  6690.    % csh
  6691.  
  6692. or
  6693.  
  6694.    % sh
  6695.  
  6696. and you can run a program from within a specific shell by preceding the name
  6697. of the executable with the name of the shell in which it will be run:
  6698.  
  6699.    % sh myprogram
  6700.  
  6701. The user's default shell at login is determined by whoever set up his account.
  6702. While there are many syntactic differences between shells, perhaps the one
  6703. that is encountered most frequently is the way in which environment variables
  6704. are set.
  6705.  
  6706.  
  6707. H5. SETTING ENVIRONMENT VARIABLES
  6708.  
  6709. Every session has associated with it environment variables, which consist of
  6710. name/value pairs and control the way in which the shell and programs run from
  6711. the shell behave. An example of an environment variable is the 'PATH'
  6712. variable, which tells the shell which directories to search when trying to
  6713. locate an executable file that the user has entered at the command line. If
  6714. you are certain that a command exists, but the shell complains that it cannot
  6715. be found when you try to execute it, there is likely a problem with the 'PATH'
  6716. variable. Environment variables are set differently depending on the shell
  6717. being used. For the Bourne shell ('sh'), it is done as:
  6718.  
  6719.    % export MYVARIABLE="avalue"
  6720.  
  6721. for the C-shell, it is done as:
  6722.  
  6723.    % setenv MYVARIABLE "avalue"
  6724.  
  6725. In both cases the quotation marks are only necessary if the value contains
  6726. spaces. The 'echo' command can be used to examine the value of an environment
  6727. variable:
  6728.  
  6729.    % echo $MYVARIABLE
  6730.  
  6731. Commands to set environment variables can also include references to other
  6732. environment variables (prepended with the "$" character), including
  6733. themselves. In order to add the path '/usr/local/bin' to the beginning of the
  6734. search path, and the current directory '.' to the end of the search path, a
  6735. user would enter
  6736.  
  6737.    % export PATH=/usr/local/bin:$PATH:.
  6738.  
  6739. in the Bourne shell, and
  6740.  
  6741.    % setenv PATH /usr/local/bin:${PATH}:.
  6742.  
  6743. in C-shell. Note the curly braces are required to protect the variable name in
  6744. C-shell.
  6745.  
  6746.  
  6747. H6. EDITING TEXT FILES
  6748.  
  6749. There are several text editors available for the Linux operating system. Some
  6750. of these editors require the X window system, while others are designed to
  6751. operate in a console or terminal. It is generally a good thing to be competent
  6752. with a terminal-based text editor, as there are times when the files necessary
  6753. for X to run are the ones that must be edited. Three popular editors are 'vi',
  6754. 'pico' and 'emacs', each of which can be started from the command line,
  6755. optionally supplying the name of a file to be edited. 'vi' is arguably the
  6756. most ubiquitous as well as the least intuitive of the three. 'pico' is
  6757. relatively straightforward for a new user, though not as often installed on
  6758. systems. If you don't have 'pico', you may have a similar editor called
  6759. 'nano'. 'emacs' is highly extensible and fairly widely available, but can be
  6760. somewhat unwieldy in a non-X environment. The newer versions each come with
  6761. online help, and offline help can be found in the manual and info pages for
  6762. each (see the section on Linux Manual and Info pages). Many programs use the
  6763. 'EDITOR' environment variable to determine which text editor to start when
  6764. editing is required.
  6765.  
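For example, to make such programs start the 'nano' editor (assuming it is
installed on your system), a Bourne shell user could run, or add to a shell
startup file:

   % export EDITOR=nano

C-shell users would use `setenv EDITOR nano` instead.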
  6766.  
  6767. H7. ROOT USER
  6768.  
  6769. Upon installation, almost all distributions set up the default administrative
  6770. user with the username 'root'. There are many things on the system that only
  6771. 'root' (or a similarly privileged user) can do, one of which is installing the
  6772. NVIDIA Linux Driver. WE MUST EMPHASIZE THAT ASSUMING THE IDENTITY OF 'root' IS
  6773. INHERENTLY RISKY AND AS 'root' IT IS RELATIVELY EASY TO CORRUPT YOUR SYSTEM OR
  6774. OTHERWISE RENDER IT UNUSABLE. There are three ways to become 'root'. You may
  6775. log in as 'root' as you would any other user, you may use the switch user
  6776. command ('su') at the command prompt, or, on some systems, use the 'sudo'
  6777. utility, which allows users to run programs as 'root' while keeping a log of
  6778. their actions. This last method is useful in case a user inadvertently causes
  6779. damage to the system and cannot remember what he has done (or prefers not to
  6780. admit what he has done). It is generally a good practice to remain 'root' only
  6781. as long as is necessary to accomplish the task requiring 'root' privileges
  6782. (another useful feature of the 'sudo' utility).
  6783.  
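As a brief illustration ('somecommand' below is only a placeholder for
whatever administrative task needs to be performed), you can switch to the
root account and later return to your own account as follows:

   % su -
   Password:
   # somecommand
   # exit

or, on systems where 'sudo' is configured, run a single command with root
privileges:

   % sudo somecommand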
  6784.  
  6785. H8. BOOTING TO A DIFFERENT RUNLEVEL
  6786.  
  6787. Runlevels in Linux dictate which services are started and stopped
  6788. automatically when the system boots or shuts down. The runlevels typically
  6789. range from 0 to 6, with runlevel 5 usually starting the X window system as
  6790. part of the services (runlevel 0 is actually a system halt, and 6 is a system
  6791. reboot). It is good practice to install the NVIDIA Linux Driver while X is not
  6792. running, and it is a good idea to prevent X from starting on reboot in case
  6793. there are problems with the installation (otherwise you may find yourself with
  6794. a broken system that automatically tries to start X, but then hangs during the
  6795. startup, preventing you from doing the repairs necessary to fix X). Depending
  6796. on your network setup, runlevels 1, 2 or 3 should be sufficient for installing
  6797. the Driver. Level 3 typically includes networking services, so if utilities
  6798. used by the system during installation depend on a remote filesystem, Levels 1
  6799. and 2 will be insufficient. If your system typically boots to a console with a
  6800. command prompt, you should not need to change anything. If your system
  6801. typically boots to the X window system with a graphical login and desktop, you
  6802. must both exit X and change your default runlevel.
  6803.  
  6804. On most distributions, the default runlevel is stored in the file
  6805. '/etc/inittab', although you may have to consult the guide for your own
  6806. distribution. The line that indicates the default runlevel appears as
  6807.  
  6808.    id:n:initdefault:
  6809.  
  6810. or similar, where "n" indicates the number of the runlevel. '/etc/inittab'
  6811. must be edited as root. Please read the sections on editing files and root
  6812. user if you are unfamiliar with this concept. Also, it is recommended that you
  6813. create a copy of the file prior to editing it, particularly if you are new to
  6814. Linux text editors, in case you accidentally corrupt the file:
  6815.  
  6816.    # cp /etc/inittab /etc/inittab.original
  6817.  
  6818. The line should be edited such that an appropriate runlevel is the default (1,
  6819. 2, or 3 on most systems):
  6820.  
  6821.    id:3:initdefault:
  6822.  
  6823. After saving the changes, exit X. After the Driver installation is complete,
  6824. you may revert the default runlevel to its original state, either by editing
  6825. the '/etc/inittab' again or by moving your backup copy back to its original
  6826. name.
  6827.  
  6828. Different distributions provide different ways to exit X. On many systems, the
  6829. 'init' utility will change the current runlevel. This can be used to change to
  6830. a runlevel in which X is not running.
  6831.  
  6832.    # init 3
  6833.  
  6834.  
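On most SysV-init based systems you can confirm the change with the 'runlevel'
utility, which prints the previous and the current runlevel; after switching
from runlevel 5 the output would look something like:

   # runlevel
   5 3
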
  6835. There are other methods by which to exit X. Please consult your distribution.
  6836.  
  6837.  
  6838. H9. LINUX MANUAL AND INFO PAGES
  6839.  
  6840. System manual or info pages are usually installed during installation. These
  6841. pages are typically up-to-date and generally contain a comprehensive listing
  6842. of the use of programs and utilities on the system. Also, many programs
  6843. include the --help option, which usually prints a list of common options for
  6844. that program. To view the manual page for a command, enter
  6845.  
  6846.    % man commandname
  6847.  
  6848. at the command prompt, where commandname refers to the command in which you
  6849. are interested. Similarly, entering
  6850.  
  6851.    % info commandname
  6852.  
  6853. will bring up the info page for the command. Depending on the application, one
  6854. or the other may be more up-to-date. The interface for the info system is
  6855. interactive and navigable. If you are unable to locate the man page for the
  6856. command you are interested in, you may need to add additional elements to your
  6857. 'MANPATH' environment variable. See the section on environment variables.
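
For example, to append a hypothetical directory '/usr/local/man' to the manual
search path in the Bourne shell:

   % export MANPATH=$MANPATH:/usr/local/man

Note that on many systems an unset 'MANPATH' means that built-in defaults are
used, so setting it explicitly may hide those defaults; see the manual page for
the 'man' command itself for details.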