Triton v0.8 is Released!


We are pleased to announce that we released Triton v0.8 under the terms of
the Apache License 2.0 (the same license as before). This new version brings bug fixes, new features and improvements;
the detailed list can be found on this Github page
(about 297 changed files, with 43,115 additions and 13,579 deletions).
We wrote this blog post to highlight the most important changes from v0.7.

What’s new in v0.8?

First of all, we would like to thank the following contributors who helped make Triton a bit
more powerful every day during the development of v0.8 (thanks all, you are amazing!):

  • Adrien Guinet
  • Alberto Garcia Illera
  • Alexey Nurmukhametov
  • Alexey Vishnyakov
  • Anton Kochkov
  • Benno Fünfstück
  • Christian Heitman
  • Elias Bachaalany
  • Igor Kirillov
  • Luigi Coniglio
  • Mastho
  • Matteo F.
  • Meme
  • Pavel
  • Peter Meerwald-Stadler
  • PixelRick
  • Robin David
  • TechnateNG
  • Toizi
  • Xinyang Ge

The following sub-sections introduce some major improvements between the v0.7 and v0.8 versions.

1 – Implicit concretization when setting a concrete value

Thread: #808.

Triton keeps, at each program point, a concrete and a symbolic state. When the user modifies a
concrete value at a specific program point, the two states can become de-synchronized and, before v0.8,
the user had to force the re-synchronization by concretizing registers or memory cells. For example,
we could have a snippet like this:

ctx.setConcreteRegisterValue(ctx.registers.rax, 0x1234)
ctx.concretizeRegister(ctx.registers.rax) # concretize the register which points to an old symbolic expression

With v0.8 you should have something like this:

ctx.setConcreteRegisterValue(ctx.registers.rax, 0x1234) # implicit concretization
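The new behavior can be pictured with a toy two-state store (plain Python for illustration only, not the Triton API): a concrete write now implicitly drops the stale symbolic binding so the two states stay in sync.

```python
class ToyContext:
    """Toy model of a concrete + symbolic register state (not the Triton API)."""
    def __init__(self):
        self.concrete = {}   # register name -> integer value
        self.symbolic = {}   # register name -> symbolic expression (here: a string)

    def set_concrete(self, reg, value):
        self.concrete[reg] = value
        # v0.8 behavior: a concrete write implicitly concretizes the register,
        # dropping any stale symbolic expression bound to it
        self.symbolic.pop(reg, None)

ctx = ToyContext()
ctx.symbolic["rax"] = "(bvadd rbx rcx)"   # rax holds a symbolic expression
ctx.set_concrete("rax", 0x1234)           # concrete write concretizes rax
print("rax" in ctx.symbolic)              # False: the two states are back in sync
```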

2 – Dealing with the path predicate

Thread: #350.

During the execution, Triton builds the path predicate when it encounters conditional instructions. We provided
some new methods which allow the user to deal a bit better with the path predicate. It’s now possible to:

  • remove the last constraint added to the path predicate using popPathConstraint();
  • add new constraints using pushPathConstraint();
  • clear the current path predicate using clearPathConstraints().

We also provided a new method which returns the path predicate to target a basic block address if this one is reachable
during the execution (do not forget that we are in a dynamic analysis context): getPredicatesToReachAddress().

For example, let’s consider at one point we want to add a post condition on our path predicate, such as rax must be
different from 0. The snippet of code should look like this:

if inst.getAddress() == [my target address]:
    rax = ctx.getRegisterAst(ctx.registers.rax)
    ctx.pushPathConstraint(rax != 0)
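The stack-like behavior of these methods can be sketched with a toy path-predicate manager (plain Python; the method names mirror the v0.8 API, but the class itself is hypothetical and constraints are held as plain strings):

```python
class ToyPathManager:
    """Toy model of the v0.8 path-predicate methods (not the Triton API)."""
    def __init__(self):
        self.constraints = []

    def pushPathConstraint(self, c):
        self.constraints.append(c)

    def popPathConstraint(self):
        self.constraints.pop()

    def clearPathConstraints(self):
        self.constraints.clear()

    def getPathPredicate(self):
        # the path predicate is the conjunction of all recorded constraints
        if not self.constraints:
            return "(true)"
        return "(and " + " ".join(self.constraints) + ")"

pp = ToyPathManager()
pp.pushPathConstraint("(= rdi 0)")
pp.pushPathConstraint("(not (= rax 0))")   # the post condition from the snippet above
print(pp.getPathPredicate())               # (and (= rdi 0) (not (= rax 0)))
pp.popPathConstraint()                     # drop the last constraint
print(pp.getPathPredicate())               # (and (= rdi 0))
```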

3 – The CONSTANT_FOLDING optimization

Thread: #835.

We added a new optimization which performs a constant folding at the build time of AST nodes. This optimization
is pretty similar to ONLY_ON_SYMBOLIZED except that the concretization occurs at each level of the AST during
its construction while ONLY_ON_SYMBOLIZED only checks if a root node of a symbolic expression contains symbolic
variables (which does not concretize sub-trees if it is true).
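The idea can be illustrated with a minimal sketch (plain Python, not the Triton implementation): when a node is built and all of its operands are constants, the builder returns the folded constant instead of allocating a node, so folding happens at every level of the AST during construction.

```python
def bvadd(a, b, width=8):
    """Build a bvadd node, folding when both operands are constants.

    Toy sketch of build-time constant folding (not the Triton implementation):
    constants are plain ints, symbolic sub-trees are tuples.
    """
    if isinstance(a, int) and isinstance(b, int):
        return (a + b) % (1 << width)      # fold immediately, no node allocated
    return ("bvadd", a, b)                 # at least one symbolic operand: keep the node

print(bvadd(250, 10))                      # 4: folded at build time (modulo 2^8)
print(bvadd("SymVar_0", bvadd(3, 4)))      # ('bvadd', 'SymVar_0', 7): sub-tree folded
```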

4 – Converting a Z3 expression to a Triton expression

Thread: #850.

It’s now possible to convert a Z3 expression into a Triton expression and vice versa using Python bindings.
Before v0.8, the conversion from z3 to Triton was only possible with the C++ API.

>>> from triton import *
>>> ctx = TritonContext(ARCH.X86_64)
>>> ast = ctx.getAstContext()

>>> x = ast.variable(ctx.newSymbolicVariable(8))
>>> y = ast.variable(ctx.newSymbolicVariable(8))

>>> n = x + y * 2
>>> print(n)
(bvadd SymVar_0 (bvmul SymVar_1 (_ bv2 8)))

>>> z3n = ast.tritonToZ3(n)
>>> print(type(z3n))
<class 'z3.z3.ExprRef'>
>>> print(z3n)
SymVar_0 + SymVar_1*2

>>> ttn = ast.z3ToTriton(z3n)
>>> print(type(ttn))
<class 'AstNode'>
>>> print(ttn)
(bvadd SymVar_0 (bvmul SymVar_1 (_ bv2 8)))

5 – Recursive calls of shared_ptr destructors

Thread: #753.

We use shared_ptr to determine if an AST is still assigned to registers or memory cells. If the reference
number of a shared_ptr is zero, it means that the current state of the execution does not need this AST anymore
and we destroy it in order to free the memory. On paper this idea looks good but there is a specific scenario
where it causes an issue. To really highlight the issue, we have to understand that when a parent P has two children
C1 and C2, these children may also have other children etc. (classical AST form). Each node is a shared_ptr
and possesses a list of children which are shared_ptr (std::vector<std::shared_ptr<AbstractNode>> children).
When the root node P has no more reference to itself, the shared_ptr calls its destructor and then the vector
list of its children is cleared which decreases the number of references to these children which may call their
destructors and so on. On a deep AST, in versions prior to v0.8, this scenario leads to a stack overflow due to the recursion
of shared_ptr destruction. For example, the following snippet of code triggers the bug (on Linux you can set a
small stack size before running this example: ulimit -s 1024).

from triton import *

ctx = TritonContext(ARCH.X86_64)

# Create a deep AST with a reference to previous nodes
for i in range(10000):
    ctx.processing(Instruction(b"\x48\xff\xc0")) # inc rax

# Assign a new AST to rax. The previous AST assigned to rax has no more
# references, and the shared_ptr destructors start cascading.
ctx.processing(Instruction(b"\x48\xc7\xc0\x00\x00\x00\x00")) # mov rax, 0

We know what you might say: "lol, Triton is easily breakable". Well, it is true for this scenario (even if
we never encountered it in real programs), but it is a real problem of using shared_ptr on ASTs (so think twice
before using them on ASTs).

So now, how can we solve it? A solution could be to keep a reference to every node in the AST manager
(AstContext class) and destroy each shared_ptr with only one reference [1] in a specific order (from the
bottom up). The problem is that we really want to keep a scalable garbage collector, and this solution
does not scale at all (we deal with billions of nodes).

Our solution is to only keep references to nodes which belong to a
depth in the AST that is a multiple of 10000. Thus, when the root node is
destroyed, the destruction recursion stops once the depth level of
10000 is reached, because the nodes there still have a reference to
them in the AST manager. The destruction then continues at the next
allocation of nodes, and so on. In other words, ASTs are destroyed in
depth steps of 10000, which avoids the stack overflow while keeping a good
scale. We benchmarked this new mechanism: it does not impact performance
and has solved the issue so far.

[1] The reference kept in the AST manager.
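The depth-step mechanism can be sketched in plain Python with weak references (a toy model, not the Triton implementation; a step of 3 replaces the real step of 10000 to keep the demo short):

```python
import weakref

STEP = 3  # Triton uses a depth step of 10000; a tiny step keeps the demo short

class Node:
    """Stand-in for an AST node holding strong references to its children."""
    def __init__(self, child=None):
        self.children = [child] if child is not None else []

anchors = []   # extra references held by the "AST manager" every STEP levels
watch = []     # weak references, to observe which nodes get destroyed

node = Node()                      # leaf (depth 0)
watch.append(weakref.ref(node))
for depth in range(1, 11):         # build a chain of depth 10
    node = Node(node)
    watch.append(weakref.ref(node))
    if depth % STEP == 0:
        anchors.append(node)       # anchor depths 3, 6 and 9

del node   # drop the root: destruction stops at the anchored depth-9 node
alive_after_root = sum(1 for w in watch if w() is not None)
print(alive_after_root)            # 10: only the unanchored root was destroyed

anchors.clear()                    # releasing the anchors destroys the remaining levels
print(sum(1 for w in watch if w() is not None))   # 0
```

In the real implementation the anchored references are released lazily (at the next node allocation), so each AST is torn down in bounded-depth slices instead of one deep recursive cascade.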

6 – The quantifier operator: forall

Thread: #860.

After reading a nice blog post about constant
synthesis, we thought it could be interesting to add the quantifier operator: forall.
For example, let’s assume we want to synthesize the following expression ((x << 8) >> 16) << 8
into x & 0xffff00 where x is a 32-bit vector and the constant 0xffff00 is the unknown.
The SMT query looks like this:

(declare-fun C () (_ BitVec 32))
(assert (forall ((x (_ BitVec 32)))
            (= (bvand x C)
               (bvshl (bvlshr (bvshl x (_ bv8 32)) (_ bv16 32)) (_ bv8 32)))))

The illustrated SMT query can be read as: there exists a constant C such that, for all x, the expression x & C is equal
to ((x << 8) >> 16) << 8. To handle such a query in Python with v0.8, you could have a snippet of code like the following:

#!/usr/bin/env python
## -*- coding: utf-8 -*-
##   $ python ./
##   {1: C:32 = 0xffff00}

from triton import *

ctx = TritonContext(ARCH.X86_64)
ast = ctx.getAstContext()

x = ast.variable(ctx.newSymbolicVariable(32))
c = ast.variable(ctx.newSymbolicVariable(32))


print(ctx.getModel(ast.forall([x], ((x << 8) >> 16) << 8 == x & c)))
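As a sanity check, the synthesized constant can be verified by evaluating both sides on sampled 32-bit values (plain Python; 0xffff00 is the constant returned by the model above):

```python
import random

MASK32 = 0xffffffff

def lhs(x):
    # ((x << 8) >> 16) << 8, computed on 32-bit vectors with logical shifts
    return ((((x << 8) & MASK32) >> 16) << 8) & MASK32

random.seed(0)
samples = [0, 1, MASK32, 0x12345678] + [random.getrandbits(32) for _ in range(1000)]
ok = all(lhs(x) == (x & 0x00ffff00) for x in samples)
print(ok)   # True: x & 0xffff00 matches the original expression on every sample
```

Of course sampling is not a proof; the forall query is what guarantees the equality for all 2^32 values of x.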

7 – Changes to the user API

Threads: #865 and …

The following v0.7 functions are deprecated and must be replaced by their v0.8 equivalent.

v0.7                                        v0.8
convertExpressionToSymbolicVariable         symbolizeExpression
convertMemoryToSymbolicVariable             symbolizeMemory
convertRegisterToSymbolicVariable           symbolizeRegister
enableMode                                  setMode
getPathConstraintsAst                       getPathPredicate
getSymbolicExpressionFromId                 getSymbolicExpression
getSymbolicVariableFromId                   getSymbolicVariable
getSymbolicVariableFromName                 getSymbolicVariable
isMemoryMapped                              isConcreteMemoryValueDefined
isSymbolicExpressionIdExists                isSymbolicExpressionExists
lookingForNodes                             search
newSymbolicVariable(size, comment="")       newSymbolicVariable(size, alias="")
symbolizeExpression(id, size, comment="")   symbolizeExpression(id, size, alias="")
symbolizeMemory(mem, comment="")            symbolizeMemory(mem, alias="")
symbolizeRegister(reg, comment="")          symbolizeRegister(reg, alias="")
unmapMemory                                 clearConcreteMemoryValue
unrollAst                                   unroll

8 – ARMv7 support

Thread: #831.

Last but not least, Triton v0.8 introduces yet another architecture: ARMv7.
With this new inclusion, Triton now has support for the most popular
architectures, namely: x86, x86-64, ARM32 and AArch64.

The ubiquity of ARM processors is one of the main reasons for adding support for
ARMv7 in Triton. ARMv7 is a widely popular architecture, particularly in
embedded devices and mobile phones. We wanted to bring the advantages of
Triton to this architecture (most tools are prepared to work on Intel
x86/x86_64 only). The other reason is to show the flexibility and
extensibility of Triton. ARMv7 poses some challenges in terms of
implementation given its many features and peculiarities (some of them quite
different from the rest of the supported architectures). Therefore, ARMv7
makes a great architecture to add to the list of supported ones.

You can start by checking some of the available samples.

Plans for v0.9

For the v0.9 version, our first plan is to integrate the SMT array logic,
which will allow the user to symbolically index memory accesses. This new memory model will not replace the current
one, which deals with bitvectors (BV) only. Our idea is to provide two memory
models, BV and ABV, and let the user switch from one to
the other according to his/her objectives. Our second plan is to improve the taint analysis integrated in Triton. Currently,
the taint engine is mono-color with an over-approximation, which makes it not really usable as a standalone analysis (it is mainly
relevant when combined with the symbolic engine). Our idea is to provide a multi-color, bit-level taint analysis based on
the semantics of the Triton IR instead of the instruction semantics, or to make it independent of the AST.


It has been almost seven months since Triton v0.7. This new version also ships many performance
improvements regarding execution speed and memory consumption; we cannot describe all of them
in this blog post (you can check them on this Github page).
We only highlighted the most notable changes from the last version. We hope you find the many
features and improvements worth the wait. Now it’s time for you to give it a try.

Stay tuned for more news on Triton!


  • Thanks to all contributors!
  • Thanks to all our Quarkslab colleagues who proofread this article.