Let's talk in general terms about the verification of digital circuits.
Verification in this field is an important process that requires experienced engineers. For example, a verification specialist working on systems with a CPU is, as a rule, expected to be fluent in scripting and shell languages (Tcl, bash, Makefile, etc.), programming languages (C, C++, assembly), HDLs/HDVLs (SystemVerilog, Verilog, VHDL), and modern methodologies and frameworks (UVM).
The share of time spent on verification reaches 70-80% of the total project time. One of the main reasons for such attention is that it is impossible to release a “patch” for a chip once it has been put into production; you can only publish “silicon errata” (this does not apply to FPGA projects).
By digital circuits, I mean:
- complex functional blocks / intellectual property cores (IP);
- application-specific integrated circuits (ASIC);
- field-programmable gate arrays (FPGA);
- systems-on-chip (SoC).
Current verification challenges
The current state and trends in verification can be judged from the challenges and problems it faces:
- The size of the design under verification (DUV) is constantly growing. Even a small microcontroller IC is a set of dozens of submodules, very often with complex functionality. Large ICs are complexes that can contain up to tens of billions of transistors, and the power-management scheme alone can exceed some processors in complexity;
- It is impossible to write the IC specification at the beginning of the project and then simply follow it: the specification changes constantly throughout development (the customer changes requirements; technical problems or the discovery of better solutions force approaches to be reconsidered; etc.). All processes must therefore absorb the dynamics of specification changes and adapt to the new requirements;
- Often several teams remote from each other, together numbering tens of people, work on the verification of one project;
- The number of individual tests and their types is enormous, and their results must be collected and analyzed;
- Simulating even purely digital systems requires a great deal of compute time;
- The completeness of the readiness criteria set for the project largely depends on the competence and intuition of the verification engineers;
- Despite the existence of test-coverage indicators (metrics), the only way to complete verification is to decide to stop it, based mostly on conclusions such as: the money or time budgeted for this project stage has run out and the design must go to production; it seems we have reached 100% code coverage; we have been testing for a week and have not found any new errors; etc.
Verification of digital circuits can be divided into the following main types:
1. functional verification – the name speaks for itself: you check whether your system performs its functions correctly;
2. formal verification – establishes the equivalence of representations of your system at different stages of the design flow, or the validity of statements (assertions) placed in the source code:
- Equivalence Checking (e.g. RTL-to-RTL, RTL-to-Gate, Gate-to-Gate);
- Property Checking (Model Checking) – checks the properties (assertions) specified in the code, for example using SVA;
3. static code analysis – checking the source code against formal criteria for compliance with the rules of the language and its constructs. Very often the configured rule sets follow the RMM (Reuse Methodology Manual). Programs for this kind of checking are usually called lint or linters;
4. physical verification – essentially DRC, LVS, PERC, etc. checks: the physical implementation of the system is checked for compliance with the technology rules and for consistency between the physical and logical representations. The set of checks depends heavily on the technology. Physical verification is typically carried out by a layout engineer or the layout design team;
5. prototyping – the use of FPGAs for functional verification.
Functional verification accounts for the largest share of the work and requires the most direct human involvement.
Static code analysis requires only the initial configuration of the tools in accordance with the company's internal design rules; after that, the tool busies itself giving “valuable guidance” to developers and needs no constant supervision.
Formal verification tools are often quite autonomous as well; they require only careful analysis of the reports they generate. They are also suitable for reverse engineering, when for some reason you have to restore the code from a netlist.
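For illustration, here is a minimal sketch of a property that a property-checking tool (or a simulator) could verify, written in SVA; the req/ack handshake and all signal names are hypothetical:

```systemverilog
// Hypothetical handshake checker: every request must be acknowledged
// within 1 to 4 clock cycles (req/ack names are illustrative).
module handshake_props (input logic clk, rst_n, req, ack);
  property p_req_gets_ack;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] ack;
  endproperty

  a_req_gets_ack: assert property (p_req_gets_ack)
    else $error("req was not acknowledged within 4 cycles");
endmodule
```

A formal tool tries to prove such a property for all reachable states, while a simulator only checks it along the stimulus it happens to run.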
Verification Tools Examples
Examples of digital verification tools (digital-on-top flow):
- verification management tools
- vManager – Cadence
- Verdi, VC Execution Manager (“ExecMan”) – Synopsys
- Questa Verification Management – Mentor Graphics
- functional – usually simulators
- Incisive, Xcelium – Cadence
- VCS – Synopsys
- ModelSim, QuestaSim – Mentor Graphics
- Verilator, Icarus Verilog – open-source simulators from independent developers
- formal verification
- JasperGold, Conformal LEC, Incisive Formal Verification Platform – Cadence
- Formality, VC Formal – Synopsys
- Formal Pro, Questa Formal Verification – Mentor Graphics
- static code analysis
- SpyGlass Lint – Synopsys
- Questa AutoCheck – Mentor Graphics
- physical verification
- Pegasus, Physical Verification System – Cadence
- Hercules, IC Validator – Synopsys
- Calibre – Mentor Graphics
Functional Verification Methods
Functional verification is a set of tests that I will allow myself to divide into three groups (this is not dogma, just personal experience):
- Positive scenarios – checking behavior in regular situations defined by the device specification, a standard, etc. I.e., we check situations where everything goes well.
- Negative scenarios – checking deviations from normal situations that are still within the specification or standard, for example a checksum mismatch, a wrong amount of received data, etc. I.e., something goes wrong, but we knew it could happen and we know how to behave in this situation.
- Non-standard situations – arbitrary random situations, from violations of communication protocols and data ordering to physical collisions on interfaces, random changes in the state of logic elements, etc. I.e., anything can happen, and you need to make sure that the design under verification returns to a working state afterwards.
The first two groups can be automated with UVCs/VIPs (Universal Verification Component / Verification IP), and the volume of tests, including automatically generated ones, can be grown quite quickly there. The third group is the “masterpiece” of verification: it requires an extraordinary approach and experience and is very difficult to automate, because most of these situations call for a separate algorithm, perhaps a CAD script or instructions for “manual” checks.
Types of functional verification metrics
Metrics are indicators of a project's test coverage. They are needed in order to understand which tests still need to be developed to cover possible situations, and how much more time verification may take.
Unfortunately, only one type of metric is computed directly from the project's source code; defining the criteria for the remaining types is the result of intellectual work.
In addition, it must be remembered that reaching the target values for one type of metric does not by itself mean the design works; the metrics must always be evaluated together.
Types of metrics:
- functional coverage. Shows how fully the functionality of the design under verification (DUV) has been exercised. The criteria for this coverage are defined by the test plan and by special constructs (covergroups) introduced into the test environment and/or the DUV, which track whether a particular function/action was performed, whether the data changed in a certain way, etc. Information from constructs embedded in the source code can be collected automatically by the CAD tools.
- code coverage – tracks state changes of source-code constructs during tests. It is collected automatically by the CAD tools and requires no extra constructs in the source code:
- signal toggling (Toggle Coverage);
- execution of each line of code (Line Coverage);
- execution of statements (Statement Coverage) – essentially Line Coverage, but able to track statements that span more than one line in the editor;
- execution of code blocks inside conditional statements or procedures (Block Coverage), a variation of Statement Coverage;
- execution of all branches of conditional statements such as if, case, while, repeat, forever, for (Branch Coverage);
- toggling of all values (true, false) of the components of logical expressions (Expression Coverage);
- coverage of finite-state-machine states (Finite-State Machine Coverage).
- assertion coverage. Assertions are special language constructs that track various events and sequences and, according to specified criteria, determine whether their occurrence is legal.
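As a sketch of how functional coverage criteria are introduced, here is a covergroup for a hypothetical 8-bit bus transaction (all names and bins are illustrative):

```systemverilog
// Illustrative covergroup for a hypothetical 8-bit bus transaction.
class bus_txn;
  rand bit [7:0] addr;
  rand bit       rw;  // 0 = read, 1 = write

  covergroup cg;
    cp_addr : coverpoint addr {
      bins low  = {[0:63]};
      bins mid  = {[64:191]};
      bins high = {[192:255]};
    }
    cp_rw : coverpoint rw;
    x_addr_rw : cross cp_addr, cp_rw;  // every range x direction combo
  endgroup

  function new();
    cg = new();  // the covergroup instance is created inside the class
  endfunction

  // cg.sample() is called from the testbench after each transaction
endclass
```

The simulator then reports which bins were hit; bins that remain empty point directly at tests still to be written.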
Functional Verification Methods
Directed Tests Method (DTM)
Direct, targeted tests. If this method is adopted in the project, the verification plan is composed of tests aimed at checking the behavior of the design under verification at specific points (states) of interest. Checking all possible situations this way, especially in complex projects, is practically impossible.
At the same time, problems that may arise in situations not covered by the tests are not detected until the device starts being used in real conditions. Typically, these tests use functional coverage metrics.
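A directed test drives one specific scenario from the test plan; a minimal SystemVerilog sketch might look like this (the DUT, its signals, and its expected response are all hypothetical):

```systemverilog
// Directed test: drive one specification-defined scenario and check
// the expected response (the DUT and all names are illustrative).
module directed_test;
  logic clk = 0, rst_n, start;
  logic [7:0] data_in, data_out;

  always #5 clk = ~clk;

  // dut u_dut (.clk, .rst_n, .start, .data_in, .data_out);  // hypothetical DUV

  initial begin
    rst_n = 0; start = 0; data_in = '0;
    repeat (2) @(posedge clk);
    rst_n   = 1;
    data_in = 8'hA5;               // the specific case of interest
    start   = 1;
    @(posedge clk) start = 0;
    repeat (4) @(posedge clk);
    if (data_out !== 8'h5A)        // expected result taken from the spec
      $error("directed case failed: got %0h", data_out);
    $finish;
  end
endmodule
```

Each such test covers exactly one point of the plan, which is why the method does not scale to all possible situations.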
Coverage-Driven Verification, Metric-Driven Verification (CDV, MDV)
The concept of creating tests aimed at achieving a certain “test coverage” of the design under verification. Metrics are relied upon to understand which tests need to be added to the verification plan in order to reach the project's readiness targets.
Coverage-analysis tools are needed to see what else to add to the verification plan. In fact, as soon as we start adjusting the verification plan in DTM based at least on code coverage, we can already consider that we have smoothly moved from DTM to CDV.
Constrained Random Verification (CRV)
Verification by applying random stimuli. These are truly automatic tests that generate random stimuli for the design under verification, but it is hard to imagine them without a symbiosis with ABV.
The method is very expensive at first, because it takes a long time to prepare the tooling. Once the initial preparation stage is complete, testing can be launched automatically, repeatedly, with different initial data (seeds). If an assertion violation is detected, the development and verification teams begin analyzing the detected error.
In a real project, one cannot limit oneself to this method alone: it can accumulate code coverage and statement coverage, yet these may say nothing about whether the design actually works correctly, i.e. complies with the specification. It must be supplemented with functional tests.
Implementing this methodology requires:
- embedding assertions at all important points in the source code of the design under verification and the test environment;
- developing generators of random stimuli and scenarios for their operation, i.e. the stimuli are random but have constraints on range (we don't have time to try everything), on ordering of delivery, etc.
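For illustration, a constrained-random stimulus class might be sketched as follows (the packet fields, ranges, and distribution weights are hypothetical):

```systemverilog
// Constrained-random stimulus: fields are random, but ranges and
// relationships are limited so simulation time is not wasted on
// uninteresting or impossible combinations (names are illustrative).
class packet;
  rand bit [7:0]  len;
  rand bit [3:0]  kind;
  rand bit [15:0] payload [];

  constraint c_len  { len inside {[4:64]}; }               // bound the range
  constraint c_kind { kind dist {0 := 70, [1:3] := 30}; }  // weighted choice
  constraint c_pl   { payload.size() == len; }             // keep fields consistent
endclass

module gen_demo;
  initial begin
    packet p = new();
    repeat (5) begin
      if (!p.randomize()) $error("randomization failed");
      $display("len=%0d kind=%0d", p.len, p.kind);
    end
  end
endmodule
```

Re-running with different seeds produces new stimulus each time, while the constraints keep every packet legal.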
Assertion Based Verification (ABV)
Verification using assertions. This is probably not even an independent method, but a component, or rather the foundation, of the methods above.
An important question in ABV is how to distribute assertions: which are best placed in the source code of the design under verification, and which in the test environment.
It should be noted right away that the Verilog language has no assertions in its standard (they can be built from basic language constructs, but the synthesizer then needs directives so that it does not try to convert them). Assertions appear only in the SystemVerilog standard; they were also present from the beginning in the VHDL and e standards.
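The point about plain Verilog can be illustrated by a hand-rolled check hidden from the synthesizer with translation pragmas (the pragma spelling below is the common Synopsys-style form; the FIFO and its signal names are illustrative):

```verilog
// A simulation-only "assertion" in plain Verilog: the check is wrapped
// in pragmas so the synthesizer skips it (names are illustrative).
module fifo_write_check (input wire clk, input wire wr_en, input wire full);
  // synopsys translate_off
  always @(posedge clk)
    if (wr_en && full)
      $display("ASSERTION FAILED: write to full FIFO at time %0t", $time);
  // synopsys translate_on
endmodule
```

In SystemVerilog the same intent is expressed directly with `assert property`, with no synthesis directives needed.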
I suggest getting acquainted with the recommendations of specialists, including Clifford E. Cummings's articles on SVA and on distributing the work of writing assertions, as well as the ABV materials on the Verification Academy website.