lm_inference.py
LM Decoding (conditional generation)
usage: lm_inference.py [-h] [--config CONFIG] [--log_level {CRITICAL,ERROR,WARNING,INFO,DEBUG,NOTSET}] --output_dir OUTPUT_DIR [--ngpu NGPU] [--seed SEED] [--dtype {float16,float32,float64}]
[--num_workers NUM_WORKERS] --data_path_and_name_and_type DATA_PATH_AND_NAME_AND_TYPE [--key_file KEY_FILE] [--allow_variable_data_keys ALLOW_VARIABLE_DATA_KEYS]
[--lm_train_config LM_TRAIN_CONFIG] [--lm_file LM_FILE] [--word_lm_train_config WORD_LM_TRAIN_CONFIG] [--word_lm_file WORD_LM_FILE] [--ngram_file NGRAM_FILE]
[--model_tag MODEL_TAG] [--quantize_lm QUANTIZE_LM] [--quantize_modules [QUANTIZE_MODULES ...]] [--quantize_dtype {float16,qint8}] [--batch_size BATCH_SIZE]
[--nbest NBEST] [--beam_size BEAM_SIZE] [--penalty PENALTY] [--maxlen MAXLEN] [--minlen MINLEN] [--ngram_weight NGRAM_WEIGHT] [--token_type {char,word,bpe,None}]
[--bpemodel BPEMODEL]
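Putting the required and common options together, an invocation might look like the following. This is a minimal sketch, not a verified recipe: the experiment paths (`exp/lm/config.yaml`, `exp/lm/valid.loss.best.pth`, `data/test/text`, `data/token_list/bpe.model`) are hypothetical placeholders for a trained-LM config, its checkpoint, the input text scp, and a BPE model, and should be replaced with your own layout.

```shell
# Hypothetical paths below; substitute your own experiment directories.
python lm_inference.py \
    --output_dir exp/lm_decode \
    --ngpu 1 \
    --data_path_and_name_and_type "data/test/text,text,text" \
    --lm_train_config exp/lm/config.yaml \
    --lm_file exp/lm/valid.loss.best.pth \
    --beam_size 10 \
    --nbest 1 \
    --maxlen 100 \
    --token_type bpe \
    --bpemodel data/token_list/bpe.model
```

`--output_dir` and `--data_path_and_name_and_type` are the only required arguments; everything else falls back to its default if omitted.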