Transcription function
RIGDOCKS-SILVER-1.4.0 and later versions include three ggml models provided by OpenAI.

| Model name | Transcription accuracy | Processing speed |
| --- | --- | --- |
| ggml-tiny.bin | △ | ◎ |
| ggml-small.bin | 〇 | 〇 |
| ggml-medium.bin | ◎ | △ |

The included models are displayed under "Extensions" > "RIGDOCKS" > "SILVER" > "Transcription".
The checked model is used by default for transcription.

You can change the default by selecting a different model.

These settings are saved even after you close REAPER.

Executing the RIGDOCKS API with ReaScript
*Please check APIDOCK for detailed specifications of each API.
1: Load the inference model using AZ_TRSC_LoadModel.
a: When using the default model settings
Do not enter any arguments.

b: When using a model in the model folder
Specify only the file name using the argument `modelPath`.

c: When using a model outside the model folder
Specify the file path using the argument `modelPath`.
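The three loading cases can be sketched in ReaScript (Lua). This is a sketch only: the exact signature and return value of AZ_TRSC_LoadModel are not documented on this page, and the calling conventions below are assumptions to be checked against APIDOCK.

```
-- Sketch only: argument and return conventions are assumed, not confirmed here.

-- a: use the default model settings (no arguments)
reaper.AZ_TRSC_LoadModel()

-- b: a model inside the model folder -- pass only the file name as modelPath
reaper.AZ_TRSC_LoadModel("ggml-small.bin")

-- c: a model outside the model folder -- pass the full file path as modelPath
reaper.AZ_TRSC_LoadModel("C:/models/ggml-medium.bin")
```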

2: Perform transcription
a: When transcribing the target media by dividing it into contexts.
Use the AZ_TRSC_Segments...... API.

b: When transcribing without dividing the target media.
Use the AZ_TRSC_Full...... API.

3: Unload the inference model using AZ_TRSC_ReleaseModel.
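Putting steps 1 through 3 together, here is a minimal end-to-end sketch in ReaScript (Lua). The transcription API names are abbreviated on this page ("AZ_TRSC_Segments……", "AZ_TRSC_Full……"), so step 2 is left as a placeholder comment; consult APIDOCK for the full names, arguments, and return values.

```
-- Sketch only: see APIDOCK for the exact API names and signatures.

-- 1: load the inference model (here: default model settings, no arguments)
reaper.AZ_TRSC_LoadModel()

-- 2: transcribe the target media
--    Use an AZ_TRSC_Segments... API to transcribe in divided contexts,
--    or an AZ_TRSC_Full... API to transcribe without dividing.
--    (Full function names are elided on this page; check APIDOCK.)

-- 3: unload the inference model when finished
reaper.AZ_TRSC_ReleaseModel()
```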
