Introduction to TurboFan

TurboFan works on a program representation called a sea of nodes. Nodes can represent arithmetic operations, loads, stores, calls, constants, etc. There are three types of edges, which we describe one by one below. Control edges are the same kind of edges you find in Control Flow Graphs.

They enable branches and loops. Value edges are the edges you would find in a Data Flow Graph: they express value dependencies between nodes. Effect edges order operations that read or write state. Consider a statement like obj[x] = obj[x] + 1: the property has to be read before it is written back. As such, there is an effect edge between the load and the store. Also, you need to increment the read property before storing it. Therefore, you need an effect edge between the load and the addition.
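
To make the three kinds of edges a bit more concrete, here is a rough sketch; the statement and the node breakdown are my own illustration, not actual Turbolizer output.

```js
// The property load, the addition and the property store each become a node.
// Value edges feed the loaded value into the addition and the sum into the store;
// effect edges force the order load -> add -> store; control edges tie these nodes
// into the surrounding control flow (branches, loops).
function increment(obj, x) {
  obj[x] = obj[x] + 1;
}
```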

In this article we want to focus on how V8 generates optimized code using TurboFan. As mentioned just before, TurboFan works with a sea of nodes, and we want to understand how this graph evolves through all the optimizations. This is particularly interesting to us because some very powerful security bugs have been found in this area; recent TurboFan vulnerabilities include the incorrect typing of certain Math builtins (Math.expm1 being a well-known example). In order to understand what happens, you really need to read the code, and the files under src/compiler in the V8 source tree are the place to look.

Run your code with --trace-turbo to generate trace files for turbolizer. We can look at the very first generated graph by selecting the "bytecode graph builder" option. The JSCall node corresponds to the Math.random call. After graph creation come the optimization phases, which, as the name implies, run various optimization passes.
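
The snippet being traced is not included in this copy; a minimal stand-in (my own reconstruction) and the way to feed it to d8 could look like this:

```js
// math-random.js -- a hypothetical stand-in for the traced snippet.
// Run it with: d8 --trace-turbo --allow-natives-syntax math-random.js
// This produces turbo-*.json files that can be loaded into Turbolizer.
function opt_me() {
  return Math.random();
}
opt_me();
%OptimizeFunctionOnNextCall(opt_me);  // force TurboFan to compile opt_me
opt_me();
```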

An optimization pass can be called during several phases. One of the early optimization phases is called the TyperPhase and is run by OptimizeGraph. The code is pretty self-explanatory. If we read the code of JSCallTyper, we see that whenever the called function is a builtin, it will associate a Type with it. For instance, in the case of a call to the MathRandom builtin, it knows that the expected return type is a Type::PlainNumber.

For the NumberConstant nodes it's easy: we simply read TypeNumberConstant. In most cases, the type will be a Range(n, n). What about those SpeculativeNumberAdd nodes now? We need to look at the OperationTyper. To get the types of the right input node and the left input node, SpeculativeToNumber is called on both of them. NumberAdd then mostly checks for corner cases, like whether one of the two types is MinusZero, for instance.

In most cases, the function will simply return the PlainNumber type. Let's quickly check the sea of nodes to indeed observe the addition of the LoadField and the change of opcode of node 25 (note that it is the same node as before; only the opcode changed).
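
Here is the kind of snippet these typing observations assume (again, a hypothetical reconstruction rather than the original code):

```js
// Math.random() keeps the additions from being constant-folded, so the graph
// contains NumberConstant nodes (typed Range(n, n)) and SpeculativeNumberAdd
// nodes (typed PlainNumber once the OperationTyper has run).
function opt_me() {
  let x = Math.random();
  let y = x + 2;   // SpeculativeNumberAdd(x, NumberConstant[2])
  return y + 3;    // SpeculativeNumberAdd(y, NumberConstant[3])
}
```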

Previously, we encountered various types, including the Range type. However, it was always a Range(n, n) of size 1. In SSA form, each variable can be assigned only once. So, in the snippet sketched just below, x0 and x1 will be created for the constants 10 and 5 assigned at lines [1] and [2].
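
The snippet itself is missing from this copy; a minimal reconstruction consistent with the description might look like this (the [1], [2], [3] markers match the lines referenced in the text):

```js
function opt_me(b) {
  let x = 10;        // [1] x0 = 10
  if (b == "foo")
    x = 5;           // [2] x1 = 5
  let y = x + 2;     // [3] x2 = phi(x0, x1), then the addition uses x2
  return y;
}
```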

At line [3], the value of x (x2 in SSA) will be either x0 or x1, hence the need for a phi function. So what about types now? The type of the constant 10 (x0) is Range(10, 10) and the type of the constant 5 (x1) is Range(5, 5).

Without surprise, the type of the phi node is the union of the two ranges, which is Range(5, 10). To understand the typing of the SpeculativeSafeIntegerAdd nodes, we need to go back to the OperationTyper implementation. It ends up calling something like AddRanger(n.Min(), n.Max(), m.Min(), m.Max()), where n and m are the input types; AddRanger is the function that actually computes the min and max bounds of the resulting Range.
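
As a toy model (deliberately ignoring the overflow, NaN and infinity handling that the real AddRanger performs), the range arithmetic for an addition boils down to adding the bounds pairwise:

```js
// Toy model of range addition, not V8 code: the result's minimum is the sum of the
// minimums and its maximum is the sum of the maximums.
function addRanger(lhsMin, lhsMax, rhsMin, rhsMax) {
  return { min: lhsMin + rhsMin, max: lhsMax + rhsMax };
}

// For the phi typed Range(5, 10) added to the constant 2 (typed Range(2, 2)):
console.log(addRanger(5, 10, 2, 2));  // { min: 7, max: 12 }  -> Range(7, 12)
```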

Our final experiment deals with CheckBounds nodes. Basically, nodes with a CheckBounds opcode add bound checks before loads and stores. In the snippet sketched below, in order to prevent values[y] from using an out-of-bounds index, a CheckBounds node is generated.
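
The snippet under discussion is not shown in this copy; a hypothetical reconstruction matching the description would be:

```js
function opt_me(b) {
  let values = [42, 1337];   // a two-element array (my own choice of values)
  let y = b ? 0 : 1;         // y ends up typed Range(0, 1)
  return values[y];          // a CheckBounds node guards this element access
}
```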

Here is what the sea of nodes graph looks like right after the escape analysis phase. The cautious reader probably noticed something interesting about the range analysis: the type of the CheckBounds node is Range(0, 1)! That leads us to an interesting phase, the simplified lowering, and the function that visits CheckBounds nodes during this phase is responsible for CheckBounds elimination, which sounds interesting!

Long story short, it compares input 0 (the index) and input 1 (the length). If the index's minimum range value is greater than or equal to zero and its maximum range value is less than the length, it triggers a DeferReplacement, which means that the CheckBounds node will eventually be removed! Once again, let's confirm that by playing with the graph. We want to look at the CheckBounds node before the simplified lowering and observe its inputs.

We can easily see the Range(0, 1) type on the index input, which satisfies the condition above. Therefore, node 58 is going to be replaced, since the analysis proved it useless. If you look at the file opcodes.h, you will notice that there are several different opcodes for additions (JSAdd, NumberAdd, SpeculativeNumberAdd, SpeculativeSafeIntegerAdd, ...). So, without going into too much detail, we're going to do one more experiment: let's write small snippets of code that generate each one of these opcodes and, for each one, confirm that we get the expected opcode in the sea of nodes. In the first case, sketched below, TurboFan speculates that x will be a small integer. This guess is made thanks to the type feedback we mentioned earlier.
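
A hypothetical version of this first snippet, written to be called only with small integers (Smis):

```js
function opt_me(x) {
  return x + 1;   // becomes a SpeculativeSafeIntegerAdd thanks to the Smi feedback
}
opt_me(3);        // warm-up calls with Smi arguments gather the type feedback
opt_me(7);
```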

Indeed, before TurboFan kicks in, V8 first quickly generates Ignition bytecode, and that bytecode gathers type feedback as it runs. If you want to know more, Franziska Hinkelmann wrote a blog post about Ignition bytecode.

This is exactly why it is called speculative optimization: TurboFan makes guesses and assumptions based on this profiling. In the snippet above, it speculates about the type of x and relies on type feedback; thus, the opcode SpeculativeSafeIntegerAdd is used. If we modify the previous code snippet a bit and use a value too large to be represented by a small integer (Smi), we get a SpeculativeNumberAdd instead, as in the sketch below.
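
A hypothetical version of the modified snippet:

```js
function opt_me(x) {
  return x + 1000000000000;  // too large for a Smi: the addition becomes a SpeculativeNumberAdd
}
opt_me(42);
```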

The third case uses a complex object for y, so a slow, generic JSAdd opcode is needed to deal with this kind of situation. The fourth case, like the SpeculativeNumberAdd example, adds a value that can't be represented by a small integer; however, this time there is no speculation involved. There is no need for any kind of type feedback, since we can guarantee that y is an integer: there is no way to make y anything other than an integer. Both cases are sketched below.
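
Hypothetical sketches of these last two cases, matching the descriptions above:

```js
// JSAdd: y is a complex object, so only the generic (slow) JSAdd can handle the addition.
function opt_me_jsadd(x) {
  let y = { valueOf() { return 10; } };
  return x + y;
}

// NumberAdd: y is guaranteed to be an integer, so no type feedback or speculation is
// needed, and the added constant does not fit in a Smi.
function opt_me_numberadd(x) {
  let y = x ? 10 : 20;
  return y + 1000000000000;
}
```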

When reducing a NumberAdd node, there are basically four different code paths (read the code comments), and only one of them leads to a node change. Let's draw a schema representing all of those cases, with nodes in red indicating that a condition is not satisfied, leading to a return of NoChange. Case [4] takes the double values of both NumberConstant nodes and adds them together. It then creates a new NumberConstant node whose value is the result of this addition. The node's right input becomes the newly created NumberConstant, while the left input is replaced by the left parent's left input.
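
As a hypothetical illustration of case [4]: for an expression such as (x + 2) + 3, the outer addition's left input becomes x itself (the left parent's left input) and its right input becomes a new NumberConstant holding 5.

```js
function opt_me(x) {
  return x + 2 + 3;   // conceptually reduced to x + 5 by case [4]
}
```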

V8 represents numbers using IEEE 754 doubles, which have a 52-bit mantissa and can therefore encode integers exactly using 53 bits. The maximum such value is pow(2, 53) - 1, which is 9007199254740991 (Number.MAX_SAFE_INTEGER). Numbers above this value can't all be represented.

As such, there will be precision loss when computing with values greater than that. A quick experiment in JavaScript, sketched below, demonstrates the problem and the strange behaviors we can run into. Let's try to better understand this. When a double uses the normalized form (the exponent is non-zero), its value is computed with the usual IEEE 754 formula: value = (-1)^sign × 1.mantissa × 2^(exponent - 1023).
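
Here is a small experiment of the kind referred to above (my own reconstruction), showing the rounding in action:

```js
console.log(Number.MAX_SAFE_INTEGER);    // 9007199254740991, i.e. 2**53 - 1
console.log(2 ** 53);                    // 9007199254740992
console.log(2 ** 53 + 1);                // 9007199254740992 -- the +1 is lost to rounding
console.log(9007199254740993 === 9007199254740992);  // true: both literals map to the same double
```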

You can try the computations using links 1, 2 and 3.



