Add alternative emulation through unicorn engine #57

Open: wants to merge 13 commits into master
10 changes: 9 additions & 1 deletion .github/workflows/build.yml
@@ -18,7 +18,12 @@ jobs:
uses: actions/checkout@v2

- name: install packages
run: sudo apt update; sudo apt upgrade -y; sudo apt install -y build-essential ninja-build libglib2.0-dev libfdt-dev libpixman-1-dev zlib1g-dev python3-tables python3-pandas python3-prctl python3-json5
run: sudo apt update; sudo apt upgrade -y; sudo apt install -y build-essential ninja-build libglib2.0-dev libfdt-dev libpixman-1-dev zlib1g-dev python3-tables python3-pandas python3-prctl python3-json5 python3-pyelftools

- name: Install latest stable Rust toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: stable

- name: Checkout submodules
run: git submodule update --init
@@ -29,5 +34,8 @@ jobs:
- name: Build Faultplugin
run: cd faultplugin; make -j; echo "done"

- name: Build Emulation Worker
run: cd emulation_worker; cargo build --release; mv target/release/libemulation_worker.so ../emulation_worker.so; echo "done"

- name: Run ARCHIE
run: cd examples/stm32; ./run.sh; cd ../riscv64; ./run.sh
20 changes: 20 additions & 0 deletions .github/workflows/lint.yml
@@ -24,3 +24,23 @@ jobs:
- run: |
black --version
black --check --diff *.py analysis/*.py

clippy:
name: Clippy
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
components: rustfmt, clippy
- uses: actions-rs/cargo@v1
with:
command: fmt
args: --all --manifest-path ./emulation_worker/Cargo.toml -- --check
- run: |
cd emulation_worker
git submodule update --init unicorn
cargo clippy -- -D warnings
3 changes: 3 additions & 0 deletions .gitmodules
@@ -1,3 +1,6 @@
[submodule "qemu"]
path = qemu
url = https://github.com/Fraunhofer-AISEC/archie-qemu.git
[submodule "emulation_worker/unicorn"]
path = emulation_worker/unicorn
url = https://github.com/unicorn-engine/unicorn.git
14 changes: 12 additions & 2 deletions README.md
@@ -40,11 +40,15 @@ mkdir -p qemu/build/debug
cd qemu/build/debug
./../../configure --target-list=arm-softmmu --enable-debug --enable-plugins --disable-sdl --disable-gtk --disable-curses --disable-vnc
make -j {CPUCORENUMBER}
cd ../../../faultplugin/
cd -
cd faultplugin
make
cd emulation_worker
cargo build --release
cp target/release/libemulation_worker.so ../emulation_worker.so
```

With this, *archie-qemu* is build in qemu/build/debug/ and the plugin is build in *faultplugin/*
With this, *archie-qemu* is built in qemu/build/debug/, the plugin is built in *faultplugin/* and the unicorn emulation worker is built and moved to the project's root directory.
If you change the build directory for *archie-qemu*, please change the path in the [Makefile](faultplugin/Makefile) in the *faultplugin/* folder for building the plugin.

## In [archie](https://github.com/Fraunhofer-AISEC/archie)
@@ -58,6 +62,7 @@ tables (tested 3.6.1)
python-prctl (tested 1.6.1)
numpy (tested 1.17.4)
json (tested 2.0.9), or json5 (tested 0.9.6)
pyelftools (tested 0.29)
```
These python3 libraries can be installed either through your Linux distribution's package manager or with pip3.
JSON5 is strongly recommended, as it allows integers to be written as hexadecimal numbers.
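A minimal sketch of what such a JSON5 fault description could look like (the field names here are illustrative placeholders, not ARCHIE's exact schema — consult the example configurations in *examples/* for the real keys):

```json5
// JSON5 accepts hex literals, comments, and trailing commas;
// plain JSON would force the addresses below into decimal form.
{
  start: { address: 0x08000000, counter: 1 },
  faults: [
    { fault_address: 0x08000134, fault_type: "instruction" },
  ],
}
```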
@@ -105,3 +110,8 @@ targ rem:localhost:1234
```
QEMU will wait until the GDB session is attached. The debugging mode is only suitable for analyzing a small number of faults, as stepping through a large number of faults is cumbersome. This should be considered when adjusting the JSON files.

#### Unicorn Engine

Instead of QEMU, the unicorn engine can be used to emulate the experiments. This feature is interchangeable with the QEMU emulation and requires no changes to the configuration files. The one exception is register faults, whose target addresses differ between the two versions; the register mapping can be looked up in the source code of unicorn's [Rust bindings](https://github.com/unicorn-engine/unicorn/tree/master/bindings/rust/src). To enable this feature, set the *--unicorn* flag.

Using the unicorn engine can result in a substantial performance increase. However, this mode cannot emulate hardware features of the target platform, such as interrupts or communication with peripheral devices.
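The register-address mismatch between the two backends can be bridged with a small translation table. The sketch below is a hypothetical illustration: the QEMU-side indices and unicorn-side values are placeholders, not the real constants — look those up in the faultplugin source and in unicorn's Rust bindings, respectively.

```python
# Hypothetical mapping from the QEMU plugin's ARM register numbering to
# unicorn's register constants. All values below are placeholders for
# illustration only; the real constants live in unicorn's bindings
# (e.g. UC_ARM_REG_R0 in bindings/rust/src).
QEMU_TO_UNICORN_ARM = {
    0: 66,   # r0  (placeholder for UC_ARM_REG_R0)
    1: 67,   # r1  (placeholder for UC_ARM_REG_R1)
    13: 12,  # sp  (placeholder for UC_ARM_REG_SP)
    15: 11,  # pc  (placeholder for UC_ARM_REG_PC)
}


def translate_register_fault(fault_address: int) -> int:
    """Translate a register fault's target address from the QEMU
    numbering to the unicorn numbering, failing loudly when a register
    has no known mapping."""
    try:
        return QEMU_TO_UNICORN_ARM[fault_address]
    except KeyError:
        raise ValueError(f"no unicorn mapping for register {fault_address}")
```

With a table like this, the same fault configuration can be reused for both backends by rewriting only the register targets.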
8 changes: 7 additions & 1 deletion build.sh
@@ -97,6 +97,12 @@ cd ../../../faultplugin/
make clean && make
cd ..

echo "Building emulation worker"
cd emulation_worker
cargo build --release
cp target/release/libemulation_worker.so ../emulation_worker.so
cd -

echo "Test ARCHIE"
cd examples/stm32
./run.sh
@@ -112,4 +118,4 @@ select yn in "YES" "NO"; do
esac
echo "Please type the number corresponding to Yes or No"
done
echo "Archie was build and tested successfully"
echo "Archie was built and tested successfully"
94 changes: 73 additions & 21 deletions controller.py
@@ -11,6 +11,8 @@
import subprocess
import time

from elftools.elf.elffile import ELFFile

try:
import json5 as json

@@ -21,7 +23,7 @@
pass

from faultclass import Fault, Trigger
from faultclass import python_worker
from faultclass import python_worker, python_worker_unicorn
from hdf5logger import hdf5collector
from goldenrun import run_goldenrun

@@ -245,12 +247,13 @@ def controller(
num_workers,
queuedepth,
compressionlevel,
qemu_output,
engine_output,
goldenrun=True,
logger=hdf5collector,
qemu_pre=None,
qemu_post=None,
logger_postprocess=None,
unicorn_emulation=False,
):
"""
This function builds the unrolled fault structure, performs golden run and
@@ -271,26 +274,49 @@

# Storing and restoring goldenrun_data with pickle is a temporary fix
# A better solution is to parse the goldenrun_data from the existing hdf5 file
pregoldenrun_data = {}
goldenrun_data = {}
if goldenrun:
[
config_qemu["max_instruction_count"],
pregoldenrun_data,
goldenrun_data,
faultlist,
] = run_goldenrun(
config_qemu, qemu_output, queue_output, faultlist, qemu_pre, qemu_post
config_qemu, engine_output, queue_output, faultlist, qemu_pre, qemu_post
)
pickle.dump(
(config_qemu["max_instruction_count"], goldenrun_data, faultlist),
(
config_qemu["max_instruction_count"],
pregoldenrun_data,
goldenrun_data,
faultlist,
),
lzma.open("bkup_goldenrun_results.xz", "wb"),
)
else:
(
config_qemu["max_instruction_count"],
pregoldenrun_data,
goldenrun_data,
faultlist,
) = pickle.load(lzma.open("bkup_goldenrun_results.xz", "rb"))

if unicorn_emulation:
elffile = ELFFile(open(config_qemu["kernel"], "rb"))
for segment in elffile.iter_segments():
if segment["p_type"] == "PT_LOAD":
segment_data = segment.data()
pregoldenrun_data["memdumplist"].append(
{
"address": segment["p_vaddr"],
"len": len(segment_data),
"numpdumps": 1,
"dumps": [list(segment_data)],
}
)
break

p_logger = Process(
target=logger,
args=(
@@ -341,22 +367,39 @@
faults = faultlist[itter]
itter += 1

p = Process(
name=f"worker_{faults['index']}",
target=python_worker,
args=(
faults["faultlist"],
config_qemu,
faults["index"],
queue_output,
qemu_output,
goldenrun_data,
True,
queue_ram_usage,
qemu_pre,
qemu_post,
),
)
if unicorn_emulation:
p = Process(
name=f"worker_{faults['index']}",
target=python_worker_unicorn,
args=(
faults["faultlist"],
config_qemu,
faults["index"],
queue_output,
engine_output,
pregoldenrun_data,
goldenrun_data,
True,
),
)
else:
p = Process(
name=f"worker_{faults['index']}",
target=python_worker,
args=(
faults["faultlist"],
config_qemu,
faults["index"],
queue_output,
engine_output,
goldenrun_data,
True,
queue_ram_usage,
qemu_pre,
qemu_post,
),
)

p.start()
p_list.append({"process": p, "start_time": time.time()})

@@ -498,6 +541,12 @@ def get_argument_parser():
help="Enables connection to the target with gdb. Port 1234",
required=False,
)
parser.add_argument(
"--unicorn",
action="store_true",
help="Enables emulation through unicorn engine instead of QEMU",
required=False,
)
return parser


@@ -526,6 +575,8 @@ def process_arguments(args):
if args.compressionlevel is None:
parguments["compressionlevel"] = 1

parguments["unicorn_emulation"] = args.unicorn

hdf5file = Path(args.hdf5file)
if hdf5file.parent.exists() is False:
print(
@@ -623,10 +674,11 @@ def process_arguments(args):
parguments["num_workers"], # num_workers
parguments["queuedepth"], # queuedepth
parguments["compressionlevel"], # compressionlevel
args.debug, # qemu_output
args.debug, # engine_output
parguments["goldenrun"], # goldenrun
hdf5collector, # logger
None, # qemu_pre
None, # qemu_post
None, # logger_postprocess
parguments["unicorn_emulation"], # enable unicorn emulation
)
70 changes: 70 additions & 0 deletions emulation_worker/.github/workflows/CI.yml
@@ -0,0 +1,70 @@
name: CI

on:
push:
branches:
- main
- master
pull_request:
workflow_dispatch:

jobs:
linux:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: PyO3/maturin-action@v1
with:
manylinux: auto
command: build
args: --release --sdist -o dist --find-interpreter
- name: Upload wheels
uses: actions/upload-artifact@v3
with:
name: wheels
path: dist

windows:
runs-on: windows-latest
steps:
- uses: actions/checkout@v3
- uses: PyO3/maturin-action@v1
with:
command: build
args: --release -o dist --find-interpreter
- name: Upload wheels
uses: actions/upload-artifact@v3
with:
name: wheels
path: dist

macos:
runs-on: macos-latest
steps:
- uses: actions/checkout@v3
- uses: PyO3/maturin-action@v1
with:
command: build
args: --release -o dist --universal2 --find-interpreter
- name: Upload wheels
uses: actions/upload-artifact@v3
with:
name: wheels
path: dist

release:
name: Release
runs-on: ubuntu-latest
if: "startsWith(github.ref, 'refs/tags/')"
needs: [ macos, windows, linux ]
steps:
- uses: actions/download-artifact@v3
with:
name: wheels
- name: Publish to PyPI
uses: PyO3/maturin-action@v1
env:
MATURIN_PYPI_TOKEN: ${{ secrets.PYPI_API_TOKEN }}
with:
command: upload
args: --skip-existing *