diff docs/TestSuiteMakefileGuide.rst @ 148:63bd29f05246

merged
author Shinji KONO <kono@ie.u-ryukyu.ac.jp>
date Wed, 14 Aug 2019 19:46:37 +0900
parents c2174574ed3a
children
--- a/docs/TestSuiteMakefileGuide.rst	Sun Dec 23 19:23:36 2018 +0900
+++ b/docs/TestSuiteMakefileGuide.rst	Wed Aug 14 19:46:37 2019 +0900
@@ -1,175 +1,27 @@
-=====================
-LLVM test-suite Guide
-=====================
+======================================
+test-suite Makefile Guide (deprecated)
+======================================
 
 .. contents::
-   :local:
+    :local:
 
 Overview
 ========
 
-This document describes the features of the Makefile-based LLVM
-test-suite as well as the cmake based replacement. This way of interacting
-with the test-suite is deprecated in favor of running the test-suite using LNT,
-but may continue to prove useful for some users. See the Testing
-Guide's :ref:`test-suite Quickstart <test-suite-quickstart>` section for more
-information.
-
-Test suite Structure
-====================
-
-The ``test-suite`` module contains a number of programs that can be
-compiled with LLVM and executed. These programs are compiled using the
-native compiler and various LLVM backends. The output from the program
-compiled with the native compiler is assumed correct; the results from
-the other programs are compared to the native program output and pass if
-they match.
-
-When executing tests, it is usually a good idea to start out with a
-subset of the available tests or programs. This keeps test run times
-short at first, and later on it is useful for investigating individual
-test failures. To run tests on only a subset of programs, simply
-change directory to the programs you want tested and run ``gmake``
-there. Alternatively, you can run a different test using the ``TEST``
-variable to change what tests are run on the selected programs (see below
-for more info).
-
-In addition to testing correctness, the ``test-suite`` directory also
-performs timing tests of various LLVM optimizations. It also records
-compilation times for the compilers and the JIT. This information can be
-used to compare the effectiveness of LLVM's optimizations and code
-generation.
-
-``test-suite`` tests are divided into three types: MultiSource,
-SingleSource, and External.
-
--  ``test-suite/SingleSource``
-
-   The SingleSource directory contains test programs that are only a
-   single source file in size. These are usually small benchmark
-   programs or small programs that calculate a particular value. Several
-   such programs are grouped together in each directory.
-
--  ``test-suite/MultiSource``
-
-   The MultiSource directory contains subdirectories which contain
-   entire programs with multiple source files. Large benchmarks and
-   whole applications go here.
-
--  ``test-suite/External``
-
-   The External directory contains Makefiles for building code that is
-   external to (i.e., not distributed with) LLVM. The most prominent
-   members of this directory are the SPEC 95 and SPEC 2000 benchmark
-   suites. The ``External`` directory does not contain these actual
-   tests, but only the Makefiles that know how to properly compile these
-   programs from somewhere else. The presence and location of these
-   external programs is configured by the test-suite ``configure``
-   script.
-
-Each tree is then subdivided into several categories, including
-applications, benchmarks, regression tests, code that is strange
-grammatically, etc. These categories should be relatively
-self-explanatory.
-
-Some tests are known to fail. Some are bugs that we have not fixed yet;
-others are features that we haven't added yet (or may never add). In the
-regression tests, the result for such tests will be XFAIL (eXpected
-FAILure). In this way, you can tell the difference between an expected
-and unexpected failure.
-
-The tests in the test suite have no such feature at this time. If the
-test passes, only warnings and other miscellaneous output will be
-generated. If a test fails, a large <program> FAILED message will be
-displayed. This will help you separate benign warnings from actual test
-failures.
-
-Running the test suite via CMake
-================================
-
-To run the test suite, you need to use the following steps:
-
-#. The test suite uses the lit test runner, so you need to have lit
-   installed first. Check out LLVM and install lit:
-   
-   .. code-block:: bash
-
-       % svn co http://llvm.org/svn/llvm-project/llvm/trunk llvm
-       % cd llvm/utils/lit
-       % sudo python setup.py install # Or without sudo, install in virtual-env.
-       running install
-       running bdist_egg
-       running egg_info
-       writing lit.egg-info/PKG-INFO
-       ...
-       % lit --version
-       lit 0.5.0dev
-
-#. Check out the ``test-suite`` module with:
-
-   .. code-block:: bash
-
-       % svn co http://llvm.org/svn/llvm-project/test-suite/trunk test-suite
-
-#. Use CMake to configure the test suite in a new directory. You cannot build
-   the test suite in the source tree.
-
-   .. code-block:: bash
-   
-       % mkdir test-suite-build
-       % cd test-suite-build
-       % cmake ../test-suite
-
-#. Build the benchmarks, using the makefiles CMake generated.
-
-.. code-block:: bash
-
-    % make
-    Scanning dependencies of target timeit-target
-    [  0%] Building C object tools/CMakeFiles/timeit-target.dir/timeit.c.o
-    [  0%] Linking C executable timeit-target
-    [  0%] Built target timeit-target
-    Scanning dependencies of target fpcmp-host
-    [  0%] [TEST_SUITE_HOST_CC] Building host executable fpcmp
-    [  0%] Built target fpcmp-host
-    Scanning dependencies of target timeit-host
-    [  0%] [TEST_SUITE_HOST_CC] Building host executable timeit
-    [  0%] Built target timeit-host
-
-    
-#. Run the tests with lit:
-
-.. code-block:: bash
-
-    % lit -v -j 1 . -o results.json
-    -- Testing: 474 tests, 1 threads --
-    PASS: test-suite :: MultiSource/Applications/ALAC/decode/alacconvert-decode.test (1 of 474)
-    ********** TEST 'test-suite :: MultiSource/Applications/ALAC/decode/alacconvert-decode.test' RESULTS **********
-    compile_time: 0.2192 
-    exec_time: 0.0462 
-    hash: "59620e187c6ac38b36382685ccd2b63b" 
-    size: 83348 
-    **********
-    PASS: test-suite :: MultiSource/Applications/ALAC/encode/alacconvert-encode.test (2 of 474)
-
-
-Running the test suite via Makefiles (deprecated)
-=================================================
-
 First, all tests are executed within the LLVM object directory tree.
 They *are not* executed inside of the LLVM source tree. This is because
 the test suite creates temporary files during execution.
 
 To run the test suite, you need to use the following steps:
 
-#. ``cd`` into the ``llvm/projects`` directory in your source tree.
 #. Check out the ``test-suite`` module with:
 
    .. code-block:: bash
 
-       % svn co http://llvm.org/svn/llvm-project/test-suite/trunk test-suite
+       % git clone https://github.com/llvm/llvm-test-suite.git test-suite
 
-   This will get the test suite into ``llvm/projects/test-suite``.
+#. FIXME: these directions are outdated and won't work. Figure out
+   what the correct thing to do is, and write it down here.
 
 #. Configure and build ``llvm``.
 
@@ -208,7 +60,7 @@
 again (unless the test code or configure script changes).
 
 Configuring External Tests
---------------------------
+==========================
 
 In order to run the External tests in the ``test-suite`` module, you
 must specify *--with-externals*. This must be done during the
@@ -237,8 +89,8 @@
 Others are added from time to time, and can be determined from
 ``configure``.
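
 As a rough sketch only (``/path/to/externals`` below is just a
 placeholder for wherever your external benchmark sources live), the
 option is passed to the test-suite ``configure`` script like any other
 flag; the full set of recognized options can be determined from
 ``configure`` itself, as noted above:

 .. code-block:: bash

     % ./configure --with-externals=/path/to/externals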
 
-Running different tests
------------------------
+Running Different Tests
+=======================
 
 In addition to the regular "whole program" tests, the ``test-suite``
 module also provides a mechanism for compiling the programs in different
@@ -257,8 +109,8 @@
 writing your own TEST Makefile for any optimization or analysis passes
 that you develop with LLVM.
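
 As a sketch of how the ``TEST`` variable is used
 (``MultiSource/Benchmarks`` is only an illustrative subdirectory, and
 ``nightly`` is the test named in the example at the end of this guide),
 you could run a different test on just one subtree like this:

 .. code-block:: bash

     % cd MultiSource/Benchmarks   # optional: test only this subtree
     % gmake TEST=nightly report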
 
-Generating test output
-----------------------
+Generating Test Output
+======================
 
 There are a number of ways to run the tests and generate output. The
 simplest is to run ``gmake`` with no arguments. This will
@@ -283,8 +135,8 @@
 called ``report.<type>.raw.out`` containing the output of the entire
 test run.
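
 A minimal sketch, assuming the suite is already configured and built as
 described above; with ``TEST=nightly`` the combined output file
 described here would be named ``report.nightly.raw.out``:

 .. code-block:: bash

     % gmake TEST=nightly report
     # look for report.nightly.raw.out for the raw output of the run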
 
-Writing custom tests for the test suite
----------------------------------------
+Writing Custom Tests for the test-suite
+=======================================
 
 Assuming you can run the test suite (e.g.
 "``gmake TEST=nightly report``" should work), it is really easy to run