Do you need to simulate the full chip?

All chip designers know the importance of getting silicon right the first time. But that goal has to be balanced against the reality of finite budgets for development and testing. So where do you draw the line?

Ideally, you would test each block rigorously, then combine the blocks into subsystems that are tested again, and finally prove by simulation at the chip level that the whole chip performs as expected. The lower down this chain you find bugs, the easier, quicker and therefore cheaper they are to fix.

Modern SoC designs try to bypass at least some of this testing by using blocks that are already silicon proven and so don’t need testing at the block level. Instead, you can focus on subsystem and chip-level tests. This is necessary: replicating the level of testing performed by the IP vendors would be uneconomic. For example, ARM spends a great deal of effort testing their cores, and those cores are also silicon proven and shown to work by all their customers. Unless a core is being used in a nonstandard way, retesting it would be a waste of effort. An SoC design team would only consider doing so if they could not trust the vendor.

So some reduction of verification time is possible by using proven IP. But we are still left simulating at chip level all the complex functionality of a modern SoC.

What is the problem?

The key problem is time. Big chips take time to simulate, and complex tests take time to write. A big modern SoC will usually have an embedded processor and many different pieces of functionality, including mixtures of digital and analog blocks, and often many configurations of the chip. Simulating all possible corners of such a chip can mean a huge number of complex tests.

For the digital parts of a chip, we can use more advanced verification techniques to reduce the time spent writing tests. Constrained random verification is one example. Traditionally, the developer creates complex directed tests, specifying exactly what stimulus the test should apply to the chip and what the expected response should be. Constrained random verification has the computer take over some of this work: the chip is exercised with randomised but constrained stimulus, and the testbench shows that all the required corners are hit and that the chip generates the correct responses. You are trading some developer time for test execution time. This can be helpful, but it still leaves us with some very long tests to completely simulate the chip, and long tests using expensive simulator licenses can become a problem.
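To make the idea concrete, here is a minimal Python sketch of the constrained-random principle. Production flows would typically use SystemVerilog/UVM or a similar environment rather than plain Python, and the transaction fields, constraints and corner definitions below are illustrative placeholders, not taken from any particular chip.

```python
# A minimal sketch of constrained-random stimulus with coverage tracking.
# Field names, legal ranges and the corner definition are assumptions for
# illustration only.
import random


def random_transaction():
    """Generate one legal bus transaction within the constraints."""
    return {
        "addr": random.randrange(0, 0x1000, 4),     # word-aligned addresses only
        "burst_len": random.choice([1, 4, 8, 16]),  # legal burst lengths
        "write": random.random() < 0.5,             # roughly even read/write mix
    }


def run(n_transactions=1000):
    coverage = set()
    for _ in range(n_transactions):
        txn = random_transaction()
        # Record which (burst length, direction) corner this transaction hits.
        coverage.add((txn["burst_len"], txn["write"]))
        # In a real bench the transaction would be driven into the DUT here
        # and its response checked against a reference model.
    print(f"corners hit: {len(coverage)} of 8")


if __name__ == "__main__":
    run()
```

The point is the division of labour: the engineer writes the constraints and the coverage definition once, and the tool generates and checks as many transactions as the simulation budget allows.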

Big chips can also be complex to configure and set up. It is often desirable to have a lot of functionality and a lot of flexibility in a chip, but it can then be difficult to understand how to configure it for testing. Learning how to configure and use the chip takes time for the verification engineers, and even the best documentation and access to the designers can’t remove this cost. This is a problem even if you are doing constrained random verification.

Chips that combine complex digital and analog logic add to the difficulty. Analog simulations of the full chip can be very slow, yet it is important to simulate analog and digital together; not doing so allows errors in the interfaces between them to go undetected. But detailed co-simulation of analog and digital is slow and will drive up costs.

It is also not uncommon to have many analog blocks that must work together. We can’t guarantee performance without simulations of the end-to-end analog chain, and yet these end-to-end simulations can be time-consuming.

How do we solve it?

The initial architectural partitioning of the chip is crucial: it can solve or cause many problems. Good architectural choices made early on can significantly reduce verification time.

As discussed, highly flexible and configurable chips can be difficult to test. However, S3semi can architect the chip with an easy-to-understand control philosophy. Writing tests then becomes easier, as does using the chip when it goes to manufacturing.

We can partition the chip into well-defined functional blocks with clear, easily understood and easily tested interfaces. Each block is tested carefully in isolation; at the chip level, we then only need to test the connections between the blocks. This is especially true if you can use silicon-proven IP for these blocks. For example, a processor is a very complex piece of IP to test, but if we use a silicon-proven core and interface it to well-defined blocks, we only need to check the memory/IO interfaces and interrupt lines at the chip level, as sketched below.
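As a sketch of what such a chip-level connectivity check might look like, here is a small Python test written for the cocotb framework, one option for Python-based testbenches. The instance and signal names (gpio_in, u_gpio.pad_in) are hypothetical placeholders for whatever the real top level exposes.

```python
# Hypothetical chip-level connectivity check using cocotb.
# Signal and instance names are placeholders for the real design hierarchy.
import cocotb
from cocotb.triggers import Timer


@cocotb.test()
async def gpio_pin_reaches_block(dut):
    """Drive a top-level pad and confirm it arrives at the GPIO block input.

    The block itself is assumed to be silicon proven, so the chip-level
    test only has to show that the connection to it is correct.
    """
    dut.gpio_in.value = 1
    await Timer(10, units="ns")  # allow the path to settle
    assert dut.u_gpio.pad_in.value == 1, "pad not connected to GPIO block"
```

Each interface in the partitioning (memory buses, IO pads, interrupt lines) gets a similarly small check; the heavy functional testing stays inside the already-proven blocks.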

We can also architect the chip to remove as many of the analog-digital interfaces as possible, particularly those that loop from digital to analog and back to digital. Where such loops are unavoidable, we can bring the analog and digital blocks involved together as one sub-block. That sub-block can be tested thoroughly in isolation relatively quickly, and we can then reduce the testing of these blocks in full-chip simulations.

We can also create simplified AMS models of all analog components, making them early in the design process and keeping them up to date throughout. This allows for simplified testing of the analog-digital interfaces and speeds up testing of the full analog chain. We can compare each AMS model to the actual analog performance and update the models to account for any discrepancies. This means we only need a limited amount of testing of the whole chip with the full analog design.
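As a minimal sketch of the idea in Python (real projects would typically use Verilog-AMS, SystemVerilog real-number models or similar), here is a simple behavioural model of an analog block plus a check that compares it against points exported from the detailed transistor-level simulation. The time constant, tolerance and reference file name are illustrative assumptions.

```python
# Behavioural model of a first-order analog low-pass stage, compared against
# reference points from the detailed analog simulation. Values and the CSV
# file name are placeholders for illustration.
import csv
import math


def lowpass_step_response(t, tau=1e-6):
    """Behavioural model: unit step response of a first-order RC filter."""
    return 1.0 - math.exp(-t / tau)


def check_model(reference_csv, tolerance=0.02):
    """Compare the simple model against (time, voltage) points produced by
    the transistor-level simulation; flag any point that drifts too far."""
    mismatches = []
    with open(reference_csv) as f:
        for row in csv.DictReader(f):
            t, v_ref = float(row["time"]), float(row["voltage"])
            v_model = lowpass_step_response(t)
            if abs(v_model - v_ref) > tolerance:
                mismatches.append((t, v_ref, v_model))
    return mismatches
```

Any points flagged by the check either reveal a real analog problem or tell us the behavioural model needs updating; either way, the fast model stays trustworthy enough to carry most of the full-chip simulation load.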

In short: good architectural partitioning and an early verification strategy are essential. Together they greatly reduce the time and cost of testing.