Lame (to convert the WAV output to MP3):
- Install Lame:
sudo apt-get install lame
- Convert:
lame -V2 output.wav output.mp3

$ pip install piper-tts
Since you installed it via pip, you can use the piper command directly in your terminal. Because Piper is a neural text-to-speech engine, it doesn't ship with voices built in; you have to provide a model file (.onnx) and its configuration file (.json).
Here is the workflow to convert your text file:
If you haven't already, download a voice from the official repository. You need both files for a specific voice (e.g., en_US-lessac-medium.onnx and en_US-lessac-medium.onnx.json).
To convert a text file (let's call it input.txt) into an audio file (output.wav), use the following syntax:
piper --model /path/to/voice.onnx --output_file output.wav < input.txt
If your text file is named script.txt and your model is in the same folder, run:
cat script.txt | piper --model en_US-lessac-medium.onnx --output_file speech.wav
Adjusting Speed: Use the --length_scale flag; a higher value slows the speech down, a lower value speeds it up.
Example (Slower): --length_scale 1.5
Adding Pause/Sentence Gap: Use the --sentence_silence flag to add seconds of silence between sentences.
Example: --sentence_silence 0.5
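The two flags can be combined in a single run. A minimal sketch, assuming en_US-lessac-medium.onnx sits in the current directory (the guard lets the snippet degrade gracefully if Piper isn't on PATH yet):

```shell
# Create a small test script (two sentences, so the pause is audible)
printf 'First sentence. Second sentence.\n' > input.txt

# Slower speech with a half-second gap between sentences
if command -v piper >/dev/null 2>&1; then
    piper --model en_US-lessac-medium.onnx \
          --length_scale 1.5 \
          --sentence_silence 0.5 \
          --output_file slow_speech.wav < input.txt
else
    echo "piper not found on PATH"
fi
```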
Instant Playback: If you want to listen to the file immediately without saving it, pipe the raw output to a player like aplay (preinstalled on Ubuntu):
cat script.txt | piper --model en_US-lessac-medium.onnx --output-raw | aplay -r 22050 -f S16_LE -t raw
Quick Note: If you get a "command not found" error, your pip script folder might not be in your PATH. You can usually fix this by running python3 -m piper instead of just piper at the start of the command.
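On most Ubuntu systems a user-level pip install puts its scripts in ~/.local/bin, so the permanent fix is usually just a PATH addition (that path is the common default — verify it on your machine with `pip show -f piper-tts`):

```shell
# Put pip's user script directory on PATH for this session...
export PATH="$HOME/.local/bin:$PATH"

# ...and for every future shell
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
```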
Do you have a specific voice or language in mind, or are you looking for recommendations on the best-sounding models?
Sorry, I opened the page, but I can't find en_US-lessac-medium.onnx and en_US-lessac-medium.onnx.json
No problem — you can download both files directly with wget:
# Download the .onnx model file
wget -O en_US-lessac-medium.onnx "https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_US/lessac/medium/en_US-lessac-medium.onnx?download=true"
# Download the .json config file
wget -O en_US-lessac-medium.onnx.json "https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_US/lessac/medium/en_US-lessac-medium.onnx.json?download=true"
If you'd prefer a British (UK) voice instead, the Cori (High quality) model is a good option:
# Download the .onnx model file (approx. 114MB)
wget -O en_GB-cori-high.onnx "https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_GB/cori/high/en_GB-cori-high.onnx?download=true"
# Download the .json config file
wget -O en_GB-cori-high.onnx.json "https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_GB/cori/high/en_GB-cori-high.onnx.json?download=true"
Once those are finished, use this command to convert your text file (e.g., input.txt) to a UK-voiced audio file:
cat input.txt | piper --model en_GB-cori-high.onnx --output_file uk_speech.wav
Here are the commands for some of the most popular British (UK) voice models available for Piper. Each model has a slightly different personality and quality level.
Alan (Medium): This is a standard, clear male British voice.
# Model
wget -O en_GB-alan-medium.onnx "https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_GB/alan/medium/en_GB-alan-medium.onnx?download=true"
# Config
wget -O en_GB-alan-medium.onnx.json "https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_GB/alan/medium/en_GB-alan-medium.onnx.json?download=true"
Southern English Female (Low): Despite the "low" quality tag, this voice is very fast and efficient, with a distinct southern accent.
# Model
wget -O en_GB-southern_english_female-low.onnx "https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_GB/southern_english_female/low/en_GB-southern_english_female-low.onnx?download=true"
# Config
wget -O en_GB-southern_english_female-low.onnx.json "https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_GB/southern_english_female/low/en_GB-southern_english_female-low.onnx.json?download=true"
The VCTK model is unique because it contains dozens of different British speakers in one file. You can switch between them using the --speaker flag (e.g., --speaker 0, --speaker 1, etc.).
# Model
wget -O en_GB-vctk-medium.onnx "https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_GB/vctk/medium/en_GB-vctk-medium.onnx?download=true"
# Config
wget -O en_GB-vctk-medium.onnx.json "https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_GB/vctk/medium/en_GB-vctk-medium.onnx.json?download=true"
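Switching VCTK speakers can be scripted in a small loop. A sketch, with arbitrarily chosen speaker IDs 0 and 1 (the guard keeps the snippet harmless if Piper or the model isn't present, and the log line simply records which files were requested):

```shell
MODEL=en_GB-vctk-medium.onnx
for spk in 0 1; do
    out="vctk_speaker_${spk}.wav"
    if command -v piper >/dev/null 2>&1 && [ -f "$MODEL" ]; then
        # Render the same line with a different speaker each time
        echo "Good afternoon." | piper --model "$MODEL" --speaker "$spk" --output_file "$out"
    fi
    # Record which files were requested (handy when scripting many speakers)
    echo "requested: $out" >> vctk_run.log
done
```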
Just swap the model name in your command. For example, to use the Alan voice:
cat temp03.txt | piper --model en_GB-alan-medium.onnx --output_file output_alan.wav
Since you've had issues with corrupted downloads before, you can run this quick "check" after downloading to see if the file is valid:
# This will show the size; Alan should be ~15MB, VCTK ~45MB
ls -lh *.onnx
If you ever want to try a Northern English accent, there is also a northern_english_male model available—just let me know and I'll grab those links for you!
Yes, please. Thank you!
The Northern English voice is a great choice—it has a very distinct, warm character compared to the standard "Received Pronunciation" (London-style) voices.
Here are the commands for the Northern English Male model (Medium quality):
# Download the .onnx model file (approx. 63MB)
wget -O en_GB-northern_english_male-medium.onnx "https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_GB/northern_english_male/medium/en_GB-northern_english_male-medium.onnx?download=true"
# Download the .json config file
wget -O en_GB-northern_english_male-medium.onnx.json "https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_GB/northern_english_male/medium/en_GB-northern_english_male-medium.onnx.json?download=true"
Use this command to hear the new accent:
cat temp03.txt | piper --model en_GB-northern_english_male-medium.onnx --output_file northern_uk.wav
Since you've downloaded several voices now, your folder might be getting a bit crowded. You can verify all your available models at once with:
ls -lh *.onnx
Common Model Sizes for Comparison:
Cori (High): ~114 MB
Northern English (Medium): ~63 MB
Alan (Medium): ~15 MB
If any of these files show a size of only a few KB, they didn't download correctly—just delete that specific file and run the wget command for it again.
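That spot-check can be automated. A sketch that flags any model under 1 MiB — the threshold is a heuristic, chosen because every real Piper voice is far larger:

```shell
# List any .onnx file in this directory smaller than 1 MiB (likely a failed download).
# The size is given in bytes (the 'c' suffix) to avoid find's unit-rounding quirk.
find . -maxdepth 1 -name '*.onnx' -size -1048576c -printf 'suspect: %p (%s bytes)\n'
```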
Enjoy the new voice! It’s one of the more unique ones in the British collection.
How to find files?
du -sk ./* | sort -nr                # everything in the current directory, largest first (sizes in KB)
or
du -sk ./* | grep -i "keyword" | sort -nr        # the same, filtered by keyword
or
find . -maxdepth 1 -iname "*keyword*" -type f -printf "%T+ %p\n" | sort -r      # newest first
or
find . -maxdepth 1 -iname "*keyword*" -type f -exec du -sk {} + | sort -nr      # largest first
How to find directories?
du -sk ./*/ | sort -nr               # every subdirectory, largest first (sizes in KB)
or
du -sk ./*/ | grep -i "keyword" | sort -nr       # the same, filtered by keyword
or
find . -maxdepth 1 -iname "*keyword*" -type d -not -path '.' -printf "%T+ %p\n" | sort -r    # newest first
or
find . -maxdepth 1 -iname "*keyword*" -type d -not -path '.' -exec du -sk {} + | sort -nr    # largest first
To list the 10 largest files under a directory, combine the find, du, sort, and head commands (replace . with a specific directory path if you want to search a different location):

find . -type f -exec du -Sh {} + | sort -rh | head -n 10

find .: starts the search from the current directory and recurses through all subdirectories.
-type f: restricts the search to regular files only, ignoring directories, links, etc.
-exec du -Sh {} +: runs du (disk usage) on the found files. -S reports the size of each individual file rather than including subdirectory totals (important for listing individual files correctly), -h prints human-readable sizes (e.g., 10M, 2G), and {} + lets find pass many filenames to a single du invocation.
| sort -rh: sorts the piped output; -r puts the largest files at the top, and -h compares human-readable numbers correctly (it knows 2G is larger than 10M).
| head -n 10: shows only the first 10 lines, i.e., the top 10 largest files.

A leaner alternative prints each file's size in bytes (%s) followed by its path (%p), sorts numerically (-n) in reverse order (-r), and shows the top 10 results:

find . -type f -printf "%s %p\n" | sort -nr | head -n 10

For an interactive view, try the ncdu tool from the Ubuntu package manager: install it with sudo apt install ncdu, then run ncdu . (or a specific path, e.g., ncdu /var/log) to navigate directories and see the largest files and folders.

Should I exclude virtual environment folders like venv/ from my backups?

The short answer is yes, absolutely. In fact, most developers would argue that you should exclude them.
Think of a virtual environment like a specialized toolkit you assembled for a specific project. While the tools inside are necessary to get the job done, you don't need to pack the entire heavy toolbox when you move house—you just need the shopping list so you can buy those same tools again at your new place.
Size: Virtual environments can contain thousands of files and take up hundreds of megabytes. Backing them up makes your backups slow and bloated.
Portability: Virtual environments are "brittle." They often contain hardcoded paths to your specific machine's Python interpreter. If you restore a venv/ folder to a different computer (or even a different folder on the same computer), it will likely break.
Redundancy: All the libraries inside (like Pandas, Flask, or Requests) already exist on the internet (PyPI). There's no need to pay for storage to keep a second copy of them.
Instead of backing up the folder, you should back up the metadata required to recreate it. This keeps your backup light and functional.
Generate the list: Before backing up, run this command inside your project:
pip freeze > requirements.txt
Back up the text file: Ensure requirements.txt is included in your backup.
Restore when needed: If you ever need to restore your project, simply create a fresh environment and run:
pip install -r requirements.txt
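The restore can be sketched end to end. This version calls the environment's own pip directly instead of activating it first, and the placeholder line only exists so the sketch runs even before a real requirements.txt is in place:

```shell
# Use the backed-up requirements.txt; create an empty placeholder if it's missing
[ -f requirements.txt ] || : > requirements.txt

# Fresh environment, then reinstall everything from the list
python3 -m venv venv
./venv/bin/pip install -r requirements.txt

# Sanity check: the new interpreter lives inside the project
./venv/bin/python -c 'import sys; print(sys.prefix)'
```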
To make your backup strategy efficient, here is a quick guide:
| Category | Include in Backup? | Examples |
| --- | --- | --- |
| Source Code | Yes | .py files, README, config files |
| Dependencies | No | venv/, env/, node_modules/ |
| Metadata | Yes | requirements.txt, pyproject.toml |
| Build Artifacts | No | __pycache__/, .pyc files, dist/ |
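To see how much space those excludable directories actually occupy in your tree, a rough scan can help — the depth limit and directory names here are heuristics, so adjust them to your layout:

```shell
# Total the size of every dependency/build directory under the current tree
find . -maxdepth 4 -type d \
    \( -name venv -o -name env -o -name node_modules -o -name __pycache__ \) \
    -prune -exec du -sh {} +
```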
Pro-tip: if you use Git for your development, ensure you have a .gitignore file in your root directory that includes venv/. This ensures you don't accidentally push those bulky files to GitHub or GitLab either.
Would you like me to generate a standard .gitignore template for your Python projects?
I back up my dev/ directory with rsync. Could you show me an rsync command that excludes dependencies and build artifacts, please?
Since you are using rsync, you can use the --exclude flag to skip those bulky, non-essential directories. This keeps your backup slim and avoids the "broken path" issues that come with moving virtual environments.
You can run a single command that targets the most common Python clutter:
rsync -avz --exclude='venv/' --exclude='env/' --exclude='__pycache__/' --exclude='.pytest_cache/' --exclude='*.pyc' /path/to/source/dev/ /path/to/destination/backup/
-a (archive): This is a "super-flag" that preserves permissions, symlinks, and timestamps. It also turns on recursion.
-v (verbose): Shows you exactly which files are being copied in real-time.
-z (compress): Compresses file data during the transfer (great if you're backing up to a network drive or remote server).
--exclude: Tells rsync to ignore specific patterns. Note that venv/ (with the trailing slash) ensures it only skips directories.
If your dev/ directory has a lot of different project types (e.g., some Python, some Node.js, some Rust), the command line can get very messy. Instead, you can create a text file named exclude-list.txt and list everything you want to skip:
exclude-list.txt
venv/
env/
__pycache__/
.pytest_cache/
.git/
node_modules/
dist/
build/
*.pyc
.DS_Store
Then run the simplified command:
rsync -avz --exclude-from='exclude-list.txt' /path/to/source/dev/ /path/to/destination/backup/
In rsync, the trailing slash on the source directory matters significantly:
dev/ (with slash): Copies the contents of the directory into the destination.
dev (no slash): Copies the directory itself into the destination.
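The trailing-slash difference is easy to verify with two throwaway directories (all names here are arbitrary):

```shell
# Build a tiny source tree plus two empty destinations
mkdir -p demo_src/dev/project demo_dst_a demo_dst_b
touch demo_src/dev/project/main.py

rsync -a demo_src/dev/ demo_dst_a/   # with slash: contents of dev/ land directly in demo_dst_a
rsync -a demo_src/dev  demo_dst_b/   # no slash: dev/ itself lands inside demo_dst_b

ls demo_dst_a   # project
ls demo_dst_b   # dev
```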
Would you like me to help you write a small bash script that automates this backup and adds a timestamp to each folder?