Improve the documentation
kif committed Sep 21, 2021
1 parent b3d917c commit 9e2e316
Showing 3 changed files with 25 additions and 8 deletions.
16 changes: 16 additions & 0 deletions doc/source/coverage.rst
@@ -0,0 +1,16 @@
Test coverage report for dahu
=============================

Measured on *dahu* version 1.0.0, 21/09/2021

.. csv-table:: Test suite coverage
:header: "Name", "Stmts", "Exec", "Cover"
:widths: 35, 8, 8, 8

"factory.py", "86", "29", "33.7 %"
"job.py", "330", "76", "23.0 %"
"plugin.py", "101", "51", "50.5 %"
"utils.py", "45", "4", "8.9 %"
"plugins/example.py", "35", "24", "68.6 %"

"dahu total", "597", "184", "30.8 %"
16 changes: 8 additions & 8 deletions doc/source/dahu.rst
@@ -6,20 +6,20 @@ Dahu: online data analysis server
The *dahu* server executes **jobs**:
------------------------------------

* Each job lives in its own thread (yes, thread, not process, it the plugin developper to ensure the work he is doing is GIL-compliant).
* Each job executes one plugin, provided by the plugin developper (i.e. the scientist)
* Each job lives in its own thread (yes, a thread, not a process: it is up to the plugin developer to ensure that the work being done is GIL-compliant).
* Each job executes one plugin, provided by the plugin developer (i.e. the scientist).
* The job de/serialises the JSON strings coming from/returning to Tango.
* Jobs are executed asynchronously: the request for a calculation is answered instantaneously with a *jobid*.
* The *jobid* can be used to poll the server for the status of the job or for manual synchronization (Tango can time-out!).
* The *jobid* can be used to poll the server for the status of the job, or for manual synchronisation (mind that Tango can time out!).
* When a job has finished, the client is notified of its status via Tango events.
* Results can be retrieved after the job has finished.

Jobs execute **plugins**:
-------------------------

* Plugins are written in Python
* Plugins can be classes or simple function
* The input and output must be JSON-seriablisable as simple dictionnaries
* Plugins are written in Python (extensions in Cython or OpenCL are common).
* Plugins can be classes or simple functions.
* The input and output MUST be JSON-serialisable, as simple dictionaries.
* Plugins are dynamically loaded from Python modules
* Plugins can be profiled for performance analysis
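The contract stated above (simple dictionaries in, simple dictionaries out) can be sketched with a toy function-style plugin. The function name and its payload below are purely illustrative, not dahu's actual plugin API:

```python
import json

def integrate(inputs):
    """Toy plugin: the only contract shown is the one stated above,
    i.e. input and output are plain, JSON-serialisable dictionaries."""
    values = inputs.get("values", [])
    return {"sum": sum(values), "count": len(values)}

# The server de/serialises such dictionaries as JSON strings:
request = json.loads('{"values": [1, 2, 3]}')
print(json.dumps(integrate(request)))  # → {"sum": 6, "count": 3}
```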

@@ -30,12 +30,12 @@ All jobs can be run offline using the `dahu-reprocess` command line tool.
This tool is not multithreaded and plugins are run directly; it is intended for:

* offline developments
* re-processing some failed online processing.
* re-processing failed online processing runs (where performance is less critical).

Dahu is light!
---------------

Dahu is a small project, started at ESRF in 2013, with fewer than 1000 lines of code.
It has been used in production since then on a couple of beamlines.
With its FIFO scheduler, dahu is very fast (1µs, 0.3ms from Tango)
With its FIFO scheduler, `dahu` is very fast (1 µs locally, 0.3 ms from Tango).

1 change: 1 addition & 0 deletions doc/source/index.rst
@@ -13,6 +13,7 @@ Contents:

dahu
installation
coverage
api/modules

Indices and tables
