Welcome to offline-inspect’s documentation!¶
Installation¶
Install the Python 3 package directly from GitHub. To be able to do this, you have to install git first. Additionally, you need a Python 3 installation. We recommend Anaconda, especially if you are a beginner with Python. If you have an issue with installing it, chat us up on our slack channel.
After these two general prerequisites have been installed, you can download and install the package from your terminal (e.g. Git Bash, Linux bash, Windows command prompt, or Anaconda prompt) as follows:
git clone https://github.com/translationalneurosurgery/tool-offspect.git
cd tool-offspect
pip install -r requirements.txt
pip install -e .
If you have a fresh Anaconda installation, this should work out of the box. Otherwise, try pip install wheel
first, as some dependencies are installed as wheels with precompiled libraries. Another solution to installation issues is setting up a fresh environment. You can do so with
pip install virtualenv
virtualenv .env
source .env/bin/activate # on linux or mac
.env\Scripts\activate # on windows
pip install -r requirements.txt
pip install -e .
# at a later stage, you can deactivate the environment with
deactivate
Updating¶
You can update the offline-inspect tool from the project root with
git pull
pip install -r requirements.txt
pip install -e .
Content¶
Architecture¶
This package consists of several modules and applications.
The first frontier is a set of command-line tools which take a set of files (with the exact number and type depending on the file format) and prepare an event-file.
This is implemented for various file formats.
Follow the workflow to prepare the event-file and populate the traces-file. This is mainly a command-line interface.
Additionally, there is a Python API to load CacheFiles, which allows manipulation of the metadata for each trace, and read-only access to global metadata and the data for each trace.
CacheFile¶
The Python interface to the CacheFile,
which checks for filename
validity during instantiation. When one of its properties is called, it loads
and parses the metadata and datasets fresh from the HDF5 file and aggregates them.
Examples
Peek¶
The most straightforward example would be loading a CacheFile and printing its content.
from offspect.api import CacheFile
cf = CacheFile("example.hdf5")
print(cf)
Iterate¶
Another use case would be printing a TraceAttribute across all traces in the file, using the iterator interface of the CacheFile, which yields the data and attributes of each Trace.
from offspect.api import CacheFile
cf = CacheFile("example.hdf5")
for data, attrs in cf:
    print("rejected?:", attrs["reject"])
Manipulate¶
We can change the value for a key of the annotations for a specific trace by indexing CacheFile.get_trace_attrs()
with a specific index. Please note that we decode and encode the values of the attrs. This is because they are stored as strings in the CacheFile, but we need them in their respective type to manipulate them properly. Accordingly, we encode them again before we set the attributes with set_trace_attrs().
from offspect.api import CacheFile, encode
cf = CacheFile("example.hdf5")
attrs = cf.get_trace_attrs(0)
attrs["stimulation_intensity_mso"] = encode(66)
cf.set_trace_attrs(0, attrs)
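The need for this round-trip can be illustrated with a self-contained stand-in, assuming (as stated above) that all attribute values live as strings inside the CacheFile. The encode and decode helpers below are hypothetical simplifications for illustration, not the real offspect implementations:

```python
from ast import literal_eval

def encode(value):
    # stand-in: store every value as its string representation
    return str(value)

def decode(text):
    # stand-in: recover int/float/list values from their string form
    try:
        return literal_eval(text)
    except (ValueError, SyntaxError):
        return text

# attributes live as strings, so numeric manipulation needs decode/encode
attrs = {"stimulation_intensity_mso": encode(66)}
value = decode(attrs["stimulation_intensity_mso"]) + 1
attrs["stimulation_intensity_mso"] = encode(value)
```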
Batch-Manipulate¶
Another typical use case would be changing one TraceAttribute across all traces in the file. Here, we iterate across all traces, and shift the onset of the TMS 5 samples to the right.
from offspect.api import CacheFile, decode, encode
cf = CacheFile("merged.hdf5")
for ix, (data, attrs) in enumerate(cf):
    key = "onset_shift"
    old = decode(attrs[key])
    print(f"Trace {ix} {key}:", old, end=" ")
    new = old + 5
    attrs["onset_shift"] = encode(new)
    cf.set_trace_attrs(ix, attrs)
    test = decode(cf.get_trace_attrs(ix)["onset_shift"])
    print("to", test)
Plotting¶
Eventually, and ideally after visual inspection, you might want to plot the resulting map. You can do so using plot_map(), as in the following example.
from offspect.api import plot_map, CacheFile
# we load a cachefile
cf = CacheFile("example.hdf5")
# and plot and show it.
display = plot_map(cf)
display.show()
# you can also save the figure with
display.savefig("example_map.png")
There is a variety of options to tune the plotting to your whims. For example, you can normalize the values, e.g. by taking the logarithm or by thresholding, by giving the foo argument a sensible Callable. Note that we add 1 to be able to deal with a Vpp of 0 from e.g. MEP-negative traces.
from math import log10
# taking the log10
plot_map(cf, foo = lambda x : log10(x + 1))
# thresholding
def threshold(x):
    return float(x > 50)
plot_map(cf, foo = threshold)
Additionally, you can use all the keywords from plot_glass()
to beautify your plot.
plot_map(cf, vmax=100, title="Example", smooth=25)
-
class
CacheFile
(fname)[source] instantiate a new cachefile from HDD
- Parameters
fname (FileName) – path to the file
For each readout, a specific set of fields must be present in the metadata of a trace. Whenever attributes are read or written, the validity of the metadata will automatically be checked to be consistent with its 'readout'.
-
get_trace_attrs
(idx)[source] read the TraceAttributes for a specific trace in the file
- Parameters
idx (int) – which trace to pick.
- Returns
attrs (TraceAttributes) – the collapsed attributes for this trace.
Example:
cf = CacheFile("example.hdf5")
for i in range(len(cf)):
    attrs = cf.get_trace_attrs(i)
Note
The TraceAttributes contain the metadata of this trace, and the metadata of its parent group, i.e. sourcefile. Additionally, two fields will be added, containing information about the ‘cache_file’ and the ‘cache_file_index’. The number of fields is therefore larger than the number of fields valid for TraceAttributes according to
filter_trace_attrs()
. This is no problem, because when you update with set_trace_attrs()
, these fields will be used for safety checks and subsequently discarded.
-
get_trace_data
(idx)[source] return the TraceData for a specific trace in the file
- Parameters
idx (int) – which trace to pick.
- Returns
attrs (TraceData) – the data stored for this trace.
Note
This is a read-only attribute, and raw data can never be overwritten with the CacheFile interface. If you need to perform any preprocessing steps, manage the TraceData with a low-level interface, e.g.
populate()
.
-
property
origins
returns a list of original files used in creating this cachefile
-
set_trace_attrs
(idx, attrs)[source] update the attributes of a specific trace
- Parameters
idx (int) – at which index to overwrite
attrs (TraceAttributes) – with which attributes to overwrite
Example:
import datetime
now = str(datetime.datetime.now())
cf = CacheFile("example.hdf5")
attrs = cf.get_trace_attrs(0)
attrs["comment"] = now
cf.set_trace_attrs(0, attrs)
Note
Because we expect the TraceAttributes to originate from a CacheFile's
get_trace_attrs()
method, we expect them to have information about their original file and index included. For safety reasons, you have to specify the index when calling this setter. Additionally, the original file must be this instance of CacheFile. If you want to directly overwrite an arbitrary attribute without these safety checks, update the values for original_file and original_index and use update_trace_attributes()
. Additionally, please note that while get_trace_attrs()
returns a complete dictionary of attributes, including those that apply to the whole group or origin file, only valid fields for trace metadata will be saved, i.e. those fields which are in correspondence with the “readout” parameter (see filter_trace_attrs()
).
Inputs¶
Different assessments, e.g. TMS Mappings, NMES ERP, or H-Wave assessments,
have different readouts (see Developing Readouts), and therefore different information needs to be stored in the CacheFile
(see Annotation Fields).
Also, over the course of the last years, we used different protocols for the same readouts. Each protocol might use different file formats, a different number of files, or even different internal structures within one file.
Therefore, we face different challenges for each implementation of each protocol. In the following paragraphs these challenges are described.
Smartmove robotic¶
These recordings come from the smartmove robotic TMS. This input format uses three files:
.cnt for EEG
.cnt for EMG
.txt for Coordinates
Note
Load the TraceData
with load_ephys_file()
and the Coords
with load_documentation_txt()
Data¶
EEG and EMG data is stored in the native file-format of the eego recording software. It can be loaded with libeep. During robotic TMS, the 64 EEG channels and the 8 EMG channels are stored in
separate .cnt
files.
Coordinates¶
During the mapping procedure, the coordinates of the target positions, i.e. where the robot will be moved to, are saved in a documentation.txt
-file. Please note that these are the recorded targets for the robotic movement,
not the actual coordinates of stimulation. The actual coordinates at the time of stimulation do not appear to have been stored at all.
Documentation.txt
The file documentation.txt stores the coordinates of each target the robot arm moved to. It does not contain information regarding manual adjustments (i.e. adjusting distance of coil to the head) or the actual coil position at the time of stimulation. Target coordinates are given in medical image coordinates (CT / 3D image).
Target counter: Counts the number of successfully reached targets, including this one.
Target number: Point number in the total list of all targets.
Target label: The ‘name’ of the target point. Usually the same as the target number.
X-vector [<m11> <m21> <m31>]
Y-vector [<m12> <m22> <m32>]
Z-vector [<m13> <m23> <m33>]
Position [<x> <y> <z>]
Date & time point [dd.mm.yy hh:mm:ss]
Experiment name [always ‘TMS exp’]
Subject ID [NnVv]
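Under the assumptions listed above (one value block per line, whitespace-separated numbers), a single documentation.txt entry could be parsed with a sketch like the following. The numeric values are made up for illustration:

```python
# hypothetical single entry, following the field list above
entry = """1
1
1
0.97 0.01 -0.24
0.02 0.99 0.08
0.24 -0.08 0.97
12.5 -33.1 48.2
01.01.70 00:00:01
TMS exp
NnVv"""

lines = entry.splitlines()
# lines 3-5 hold the X/Y/Z vectors, line 6 the position
x_vec, y_vec, z_vec = (list(map(float, lines[i].split())) for i in (3, 4, 5))
position = list(map(float, lines[6].split()))
# per the notation above, the X/Y/Z vectors are the columns of the matrix
rotation = [[x_vec[i], y_vec[i], z_vec[i]] for i in range(3)]
```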
The coordinates of the targets are stored in one or multiple targets_*.sav
-files in XML format. The filename of this save
file encodes experiment, subject pseudonym, date and hour, e.g.:
targets_<experiment>_<VvNn>_20190603_1624.sav
. These coordinates are e.g. the grid of targets predefined before starting the mapping.
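Given that filename template, the encoded parts could be recovered as sketched below; the experiment name "mapping" is a made-up example:

```python
fname = "targets_mapping_VvNn_20190603_1624.sav"
# strip the fixed prefix and suffix
stem = fname[len("targets_"):-len(".sav")]
# split from the right, since the experiment name itself may contain underscores
experiment, subject, date, hour = stem.rsplit("_", 3)
```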
The file success.txt stores the coordinates of only the last target the robot arm moved to. The first line reads ‘success’ (move ended at desired position), ‘start’ (move started but not ended) or ‘fail’ (move ended before reaching the target due to error). The second line contains the timestamp of when the status was updated. For line 4 to 10, same notation as in documentation.txt.
The file target_TMS_exp_[NnVv]_[yyyymmdd_hhmm] stores the coordinates of all created targets. It contains the position (<x>, <y> and <z>), matrix operations (<m11>, <m12>… until <m33>) and target label (<label>), each labeled as such.
Module Content¶
-
cut_traces
(cntfile, annotation)[source] cut the TraceData from a cnt-file given Annotations
- Parameters
cntfile (FileName) – the cnt-file for cutting the data; must correspond in name to the one specified in the annotation
annotation (Annotations) – the annotations specifying e.g. onsets as well as pre and post durations
- Returns
traces (List[TraceData])
-
prepare_annotations
(docfile, cntfiles, channel, pre_in_ms, post_in_ms, select_events=['0001'])[source] load a documentation.txt and cnt-files and distill annotations from them
- Parameters
docfile (FileName) – the documentation.txt with the target coordinates
cntfiles (List[FileName]) – a list of the .cnt-file with the EEG data and triggers and the .cnt-file with the EMG data
channel (str) – which channel to pick
pre_in_ms (float) – how many ms to cut before the tms
post_in_ms (float) – how many ms to cut after the tms
- Returns
annotation (Annotations) – the annotations for these origin files
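As a side note, pre_in_ms and post_in_ms presumably map onto the samples_pre_event and samples_post_event fields via the sampling rate; a minimal sketch of that assumed conversion:

```python
samplingrate = 1000  # Hz, as in the example peek output further below
pre_in_ms, post_in_ms = 100.0, 100.0
# convert cut durations from milliseconds into samples
samples_pre_event = int(pre_in_ms * samplingrate / 1000)
samples_post_event = int(post_in_ms * samplingrate / 1000)
```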
XDF based protocols¶
This kind of file format is our preferred one. It is open-source, well-defined, and extensible, and has pxdf to load it with Python. You will need one file:
.xdf
Data¶
LabRecorder can record multiple streams into a single .xdf
-file. These files can therefore contain not only EEG and EMG, but also e.g. pupilometric data, respiration effort, grip force, and many more. As it allows recording multiple streams, it also offers the option to record coordinates (as e.g. sent with every pulse from localite version 4.0) together with the raw data (as sent e.g. by eego or bvr) and additional markers.
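Picking the relevant streams out of such a multi-stream recording amounts to selecting them by name. In this sketch only "localite_marker" comes from this documentation (it is the default event_stream of prepare_annotations below); the other stream names and dictionaries are invented stand-ins:

```python
# invented stand-ins for streams loaded from an .xdf-file
streams = [
    {"name": "eego_eeg", "type": "EEG"},
    {"name": "localite_marker", "type": "Markers"},
    {"name": "grip_force", "type": "Force"},
]
# index the streams by name and pick the marker stream
by_name = {s["name"]: s for s in streams}
marker_stream = by_name["localite_marker"]
```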
Coordinates¶
In the optimal case, the .xdf
-file already contains sufficient information about the coordinates, and pairing is automatic. Yet, there will be some .xdf
-files where not all streams were recorded. This might have happened e.g. due to errors in the recording script, an erroneous automated recording, or during manual recording with LabRecorder. In these cases, information about coordinates or other markers can be missing. The pairing of coordinates with a specific trace then needs to be reconstructed manually (see Manually linking the coordinates).
If multiple protocols were recorded in one xdf
-file, as often happened during manual recording, we will have hundreds of stimuli. Worse, even marker-streams can be missing, leaving no information about when a protocol started within the long recording. Linking them to the correct coordinates is tricky, and the best chance is probably taking account of the relative latency between subsequent stimuli.
-
cut_traces
(xdffile, annotation)[source] cut the TraceData from an xdf-file given Annotations
- Parameters
xdffile (FileName) – the xdf-file for cutting the data; must correspond in name to the one specified in the annotation
annotation (Annotations) – the annotations specifying e.g. onsets as well as pre and post durations
- Returns
traces (List[TraceData])
-
prepare_annotations
(xdffile, channel, pre_in_ms, post_in_ms, xmlfile=None, event_name='coil_0_didt', event_stream='localite_marker', comment_name=None)[source] load an xdf-file and distill annotations from it
- Parameters
xmlfile (FileName) – an optional xml file with information about the target coordinates
readout (str) – which readout to use
channel (str) – which channel to pick
pre_in_ms (float) – how many ms to cut before the tms
post_in_ms (float) – how many ms to cut after the tms
xdffile (FileName) – the
.xdf
-file with the recorded streams, e.g. data and markers
- Returns
annotation (Annotations) – the annotations for these origin files
Manually linking the coordinates¶
A big challenge arises when coordinates have to be linked with each stimulus manually. This can be done using e.g. the labnotes, and would require manual entry during visual inspection. We can support this reconstruction using independently stored coordinate positions.
In our lab, this means the xml
-files stored by localite. When e.g. a
predefined grid was used, this file contains the xyz-coordinates of the target
positions. Usually, a grid with 6x6 nodes was used, and each target was
stimulated 5 times. This would have resulted in 180 stimuli and 36 target
coordinates. In these cases, we can attempt to prepopulate the
CacheFile
with this information.
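For the 6x6-grid example above, prepopulating could amount to expanding the 36 target coordinates into one coordinate per stimulus, assuming the targets were stimulated in order, 5 times each; the grid coordinates below are invented:

```python
# hypothetical 6x6 grid of target coordinates (x, y, z)
targets = [(float(x), float(y), 0.0) for x in range(6) for y in range(6)]
# 5 stimuli per target, in order -> one coordinate per stimulus
coords_per_stimulus = [t for t in targets for _ in range(5)]
```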
Annotation Fields¶
Every CacheFile has one or more original files. Each file comes with original annotations stored in its MetaData. Additionally, each TraceData comes with its own set of annotations. There is a set of fields common to all OriginAnnotations and TraceAnnotations, but they also have additional fields which depend on the type of readin / readout used.
Generic MetaData fields implemented in all readouts¶
-
valid_trace_keys
= ['id', 'event_name', 'event_sample', 'event_time', 'onset_shift', 'time_since_last_pulse_in_s', 'reject', 'comment', 'examiner'] information contained in every trace, regardless of readout
-
can_vary_across_merged_files
= ['global_comment', 'filedate'] information about the origin file
-
must_be_identical_in_merged_file
= ['channel_labels', 'channel_of_interest', 'samples_post_event', 'samples_pre_event', 'samplingrate', 'subject', 'readout', 'readin', 'version'] must be identical across original files merged into this cachefile
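The distinction between the two lists can be sketched as a merge-time check: fields from must_be_identical_in_merged_file have to agree across origin files, while fields from can_vary_across_merged_files may differ. The origin metadata below is invented for illustration:

```python
must_be_identical = ["samplingrate", "subject", "readout"]
file_a = {"samplingrate": "1000", "subject": "VnNn", "readout": "cmep", "filedate": "1970-01-01"}
file_b = {"samplingrate": "1000", "subject": "VnNn", "readout": "cmep", "filedate": "1970-01-02"}
# fields that must agree but do not would block the merge
conflicts = [key for key in must_be_identical if file_a[key] != file_b[key]]
```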
Contralateral MEPs induced by single pulse TMS¶
This is a readout for a single channel EMG response evoked by single pulse transcranial magnetic stimulation. As this will be a single-channel-readout with a relatively clear waveform, we store only the magnitude of the first negative and positive peak and the zero-crossing latency.
-
valid_keys
= ['stimulation_intensity_mso', 'stimulation_intensity_didt', 'neg_peak_magnitude_uv', 'neg_peak_latency_ms', 'pos_peak_magnitude_uv', 'pos_peak_latency_ms', 'zcr_latency_ms', 'xyz_coords'] valid keys for tms-cmep (formerly known as ‘contralateral-mep’)
Ipsilateral MEPs induced by single pulse TMS¶
This is a readout for a single channel EMG response evoked by single pulse transcranial magnetic stimulation. As this will be a single-channel-readout with a relatively clear waveform, we store only the magnitude of the first negative and positive peak and the zero-crossing latency.
-
valid_keys
= ['stimulation_intensity_mso', 'stimulation_intensity_didt', 'imep_occurence', 'imep_latency', 'imep_magnitude', 'estimation_method', 'xyz_coords', 'channel_of_interest'] valid keys for tms-imep
ERPs induced by single pulse TMS¶
This is a readout for EEG responses evoked by single pulse transcranial magnetic stimulation. As this will be a multi-channel-readout, there would be a tremendous amount of peaks and latencies. Therefore, we only store the GMFP trace results in the metadata.
-
valid_keys
= ['stimulation_intensity_mso', 'stimulation_intensity_didt', 'xyz_coords', 'gmfp_neg_peaks_magnitude_uv', 'gmfp_neg_peaks_latency_ms', 'gmfp_pos_peaks_magnitude_uv', 'gmfp_pos_peaks_latency_ms', 'gmfp_zcr_latencies_ms'] valid keys for tms-erp
Workflow¶
The workflow is described in the figure below. There are several use cases for different protocols of data recording. Every protocol follows the same workflow. You select a specific set of source files, and initialize a new CacheFile from these files (see Initialization). Afterwards, you can visually inspect the CacheFile (see Visual Inspection). Obviously, the package needs to be installed first (see Installation).
![digraph G{
compound=true;
# rankdir = LR;
# splines=ortho
subgraph cluster0 {
label = "init\nfrom different fileformats"
axdf[label="Automated.xdf"]
nxdf[label="Manual.xdf"]
smartmove[label="smartmove TMS"]
mat[label="Matlab Protocol"]
}
subgraph cluster1{
rank = 1
Annotations
TraceData
}
subgraph cluster2{
rank = 1
ForkedAnnotations
ForkedTraceData
}
cache[label="CacheFile"]
cache1[label="second CacheFile\nfrom same subject"]
merged[label="merged CacheFile"]
merge[shape="circle", color=magenta]
cache -> merge[color=magenta]
cache1 -> merge[color=magenta]
merge -> merged
smartmove -> Annotations[ltail=cluster0] [label="new", color="blue"];
Annotations -> TraceData [label="cut", color="blue"];
TraceData -> cache[ltail=cluster1, lhead=cluster2] [color="blue"];
cache -> ForkedAnnotations [lhead=cluster1] [label="fork", color="magenta"];
ForkedAnnotations -> ForkedTraceData [label="cut", color="blue"];
cache -> cache [label="inspect", color=blue]
cache1 -> cache1 [label="inspect", color=blue]
}](_images/graphviz-c74ec49c8d15a70d20ca5bf663c7c1657240c8ab.png)
Initialization¶
Initialization is a two-step process. First, annotations are created. Second, these annotations are used to cut the recorded data for the desired channel into Traces. The CacheFile created in this fashion can then be visually inspected. All of this happens under the hood and this separation is only important to later allow easier forking.
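The second, cutting step can be sketched in a self-contained way: given event onsets (from the annotations) and pre/post durations in samples, each Trace is a slice of the recording. The data below are stand-ins, not the real implementation:

```python
recording = list(range(1000))  # stand-in for one channel of raw data
event_samples = [200, 500, 800]  # onsets taken from the annotations
pre, post = 100, 100  # samples_pre_event / samples_post_event
# each trace is the window around one event
traces = [recording[onset - pre : onset + post] for onset in event_samples]
```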
To create a CacheFile for one of our many input formats (see Inputs), you have to open a terminal, e.g. Linux bash, Git Bash on Windows, or the Windows command prompt. You have to specify the files from which you convert, the file into which you convert, and various parameters to select channels, pre/post duration etc. Find command line examples below. Note that a CacheFile always has the .hdf5
-suffix and is actually a file organized in the Hierarchical Data Format.
The basic command for converting TMS data to a CacheFile is offspect tms
, and you can get its signature and help with offspect tms -h
. Based on the files you hand it, the program tries to automatically detect under which protocol they were recorded. This can fail - in that case post an Issue. If you have problems, feel free to chat with me on our slack. You can also browse through old issues, maybe your problem was already discussed. Consider also reading the general CLI documentation Command Line Interface.
-
cli_tms
(args)[source] Look at the CLI signature at Command Line Interface
Matlab protocol
Create a new CacheFile directly from the files for data and targets:
offspect tms -t test.hdf5 -f coords_contralesional.xml /map_contralesional.mat -pp 100 100 -r contralateral_mep -c EDC_L
Smartmove protocol
Peek into the source file for the eeg:
eep-peek VvNn_VvNn_1970-01-01_00-00-01.cnt
which tells you which events are in the cnt file. Here, we use event 1.
Create a new CacheFile using the file for targets, emg and eeg:
offspect tms -t test.hdf5 -f VvNn_VvNn_1970-01-01_00-00-01.cnt VvNn\ 1970-01-01_00-00-01.cnt documentation.txt -r contralateral_mep -c Ch1 -pp 100 100 -e 1
XDF protocol with localite stream
Convert directly from the source xdf file:
offspect tms -f mapping_contra_R004.xdf -t map.hdf5 -pp 100 100 -r cmep -c EDC_L
Fork¶
You can fork a new CacheFile by copying its annotations and applying it on a new source file. This can be used to create a CacheFile with the same annotations, e.g. timestamps of triggers, rejection flags etc, but for a different EMG channel.
Merge¶
You can also merge two CacheFiles together. This appends both source CacheFiles into a new CacheFile, and can be done recursively for multiple CacheFiles. The advantage lies in having only a single file for multiple source files, e.g. from multiple runs of the same measurement.
For the use case of visual inspection of contralateral single-channel MEPs after TMS, there exist a CLI and an API.
Visual Inspection¶
After you were able to create a cachefile, you can visually inspect it. To do so, start the GUI from the command line, simply by typing offspect gui
. You can switch between different resolutions and setups of the GUI using the -r
parameter. Currently, LR, HR, and XS are implemented. Additionally, you can tell the GUI to immediately load a file, sidestepping the initial manual picking of a CacheFile, with -f
.
Again, consider reading also the general CLI documentation Command Line Interface. The GUI should be self-explanatory, but it certainly is in an early stage. If you have any issues or desire any new features or changes, post an Issue. If you have problems, chat with me or Ethan on our slack.
If you do not have a CacheFile, but still want to try it out, you can follow the instructions in the paragraph on GUI test and development to mock a CacheFile.
API¶
The API is based on the two functions cut_traces and prepare_annotations, which have their specific implementation and function signature for each protocol. Look at the more in-depth documentation at Inputs and the Full API documentation. Please do not expect anything in the API to be stable at the current development stage of this package.
Development¶
Development can concern one of the four aspects of this package: the GUI, the CLI, the loadable file formats, or the readouts allowed in the cachefile.
Developing Readouts¶
Currently implemented readouts can be found at ALL_RIOS
, although not all recording protocols might be supported. Each readout comes with a definition of its TraceAttributes
, i.e.
a specific set of keys for the MetaData of each Trace. All keys should be restricted to str
. We recommend that each field's values are limited to str
, int
, float
, or List[str]
, List[int]
, List[float]
. They need to be stored in HDF5, which works best with strings as keys and values. When annotations are being accessed, it is sensible to use decode()
and encode()
for safe parsing.
In general, it is the responsibility of the developer to add the respective list of valid keys for each readin / readout combination.
To allow for a clear organization, please put each protocol's handlers in the input folder. There, each readin has its readout folder. At this lowest level, the handlers for the protocols are defined in their own modules, while the valid trace keys are defined in their __init__.py. See e.g. valid_keys
. Be aware that at least the valid_trace_keys
always need to be implemented during the prepare_annotation specific for this protocol. Please note also that across merged files the TraceAnnotation values for some keys must_be_identical_in_merged_file
, while others can_vary_across_merged_files
.
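The filtering of trace metadata down to a readout's valid keys, as described above, can be sketched as a dictionary comprehension; the attribute values here are invented:

```python
# generic keys every readout shares (a subset of valid_trace_keys above)
valid_trace_keys = ["id", "event_name", "reject", "comment", "examiner"]
attrs = {"id": "1", "reject": "False", "cache_file": "merged.hdf5"}
# drop fields that are not valid trace metadata before saving
filtered = {k: v for k, v in attrs.items() if k in valid_trace_keys}
```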
GUI test and development¶
Mock two hdf5 files using python tests/mock/cache.py fname1.hdf5 fname2.hdf5
from the project root. Merge these two files with offspect merge -f fname1.hdf5 fname2.hdf5 -t merged.hdf5
. Peek into the merged file with offspect peek merged.hdf5
. This should give you the following output:
-------------------------------------------------------------------------------
origin : template_R001.xdf
filedate : 1970-01-01 00:01:01
subject : VnNn
samplingrate : 1000
samples_pre_event : 100
samples_post_event : 100
channel_labels : ['EDC_L']
channel_of_interest : EDC_L
readout : cmep
readin : tms
global_comment : patient was tired
history :
version : 0.0.1
traces_count : 2
-------------------------------------------------------------------------------
origin : template_R002.xdf
filedate : 1970-01-01 23:59:59
subject : VnNn
samplingrate : 1000
samples_pre_event : 100
samples_post_event : 100
channel_labels : ['EDC_L']
channel_of_interest : EDC_L
readout : cmep
readin : tms
global_comment :
history :
version : 0.0.1
traces_count : 2
Start visual inspection using the GUI with this file with offspect gui -f merged.hdf5
or run offspect gui
and select the desired cachefile using the menu. You can also set the gui resolution, see Command Line Interface for more information.
Command Line Interface¶
offspect offers a command line interface. This interface can be accessed after installation of the package from the terminal, e.g. peek into a CacheFile with
offspect peek example.hdf5
offspect¶
usage: offspect [-h] {peek,merge,tms,gui,plot} ...
Create, manipulate and inspect cachefiles for offline inspection of evoked
potentials
positional arguments:
{peek,merge,tms,gui,plot}
peek peek into a cachefile and print essential information
merge merge two cachefiles into one
tms prepare cachefiles for a tms protocol
gui start the visual inspection GUI
plot plot the map for a cachefile
optional arguments:
-h, --help show this help message and exit
offspect peek¶
usage: offspect peek [-h] fname
positional arguments:
fname filename to peek into
optional arguments:
-h, --help show this help message and exit
offspect merge¶
usage: offspect merge [-h] -t TO -f SOURCES [SOURCES ...] [--verbose]
optional arguments:
-h, --help show this help message and exit
-t TO, --to TO filename to merge into. May not already exist
-f SOURCES [SOURCES ...], --from SOURCES [SOURCES ...]
<Required> list of files to merge
--verbose, -v be more verbose
offspect tms¶
usage: offspect tms [-h] -t TO -f SOURCES [SOURCES ...] -r READOUT -c CHANNEL
-pp PREPOST [PREPOST ...]
[-e SELECT_EVENTS [SELECT_EVENTS ...]]
optional arguments:
-h, --help show this help message and exit
-t TO, --to TO filename of the cachefile to be populated
-f SOURCES [SOURCES ...], --from SOURCES [SOURCES ...]
<Required> list of input files
-r READOUT, --readout READOUT
the desired readout, valid are: ['imep', 'cmep',
'erp']
-c CHANNEL, --channel CHANNEL
the desired channel
-pp PREPOST [PREPOST ...], --prepost PREPOST [PREPOST ...]
<Required> positional arguments of pre and post
duration
-e SELECT_EVENTS [SELECT_EVENTS ...], --events SELECT_EVENTS [SELECT_EVENTS ...]
<Required> select events, e.g. stream and name or
names depending on protocol
offspect gui¶
usage: offspect gui [-h] [-r RESOLUTION] [-f FILENAME]
optional arguments:
-h, --help show this help message and exit
-r RESOLUTION, --resolution RESOLUTION
Which resolution to use for the window. leave empty
for default, or set to LR or HR
-f FILENAME, --file FILENAME
Which file to load during startup
offspect plot¶
usage: offspect plot [-h] -f CFNAME [-t SFNAME] [--kwargs KWARGS]
optional arguments:
-h, --help show this help message and exit
-f CFNAME, --filename CFNAME
Which cachefile to plot
-t SFNAME, --figname SFNAME
The name of the imagefile to save the plot
--kwargs KWARGS A dictionary of additional keyword arguments to
finetune the plotting