Created: May 28, 2025
requests==2.28.1
flask==3.0.2
pydub
pydantic
numpy
scipy
tensorboard
librosa==0.9.2
numba
pytorch-lightning
gradio==4.19
gradio_client
ffmpeg-python
onnxruntime
tqdm
funasr==1.0.0
cn2an
pypinyin
pyopenjtalk
g2p_en
torchaudio
modelscope==1.10.0
sentencepiece
transformers
chardet
PyYAML
psutil
jieba_fast
jieba
LangSegment>=0.2.6
Faster_Whisper
flask_httpauth
flask_cors
Running pip install on this file gives:
ERROR: Ignored the following versions that require a different python version: 0.52.0 Requires-Python >=3.6,<3.9; 0.52.0rc3 Requires-Python >=3.6,<3.9; 0.53.0 Requires-Python >=3.6,<3.10; 0.53.0rc1.post1 Requires-Python >=3.6,<3.10; 0.53.0rc2 Requires-Python >=3.6,<3.10; 0.53.0rc3 Requires-Python >=3.6,<3.10; 0.53.1 Requires-Python >=3.6,<3.10; 0.54.0 Requires-Python >=3.7,<3.10; 0.54.0rc2 Requires-Python >=3.7,<3.10; 0.54.0rc3 Requires-Python >=3.7,<3.10; 0.54.1 Requires-Python >=3.7,<3.10; 1.16.0rc1 Requires-Python >=3.11; 1.6.2 Requires-Python >=3.7,<3.10; 1.6.3 Requires-Python >=3.7,<3.10; 1.7.0 Requires-Python >=3.7,<3.10; 1.7.1 Requires-Python >=3.7,<3.10
ERROR: Could not find a version that satisfies the requirement LangSegment>=0.2.6 (from versions: 0.2.0)
ERROR: No matching distribution found for LangSegment>=0.2.6
The error you're seeing is caused by a Python version incompatibility with some packages in your requirements list, most notably LangSegment>=0.2.6.
You're getting messages like:

```text
Ignored the following versions that require a different python version: ...
Could not find a version that satisfies the requirement LangSegment>=0.2.6
```
This means LangSegment>=0.2.6 does not exist on PyPI, at least not for your Python version. According to PyPI, the latest LangSegment release is 0.2.0; 0.2.6 or higher does not exist (as of now). Replace:

```text
LangSegment>=0.2.6
```

With:

```text
LangSegment==0.2.0
```
This will stop pip from trying to fetch a non-existent version.
Some packages (like older funasr or LangSegment) might need an older interpreter. Try setting up a virtual environment with a compatible Python version:
```bash
# Using pyenv, for example
pyenv install 3.8.18
pyenv virtualenv 3.8.18 langenv
pyenv activate langenv
pip install -r requirements.txt
```
To catch more of these errors early:
```bash
pip install pipdeptree
pipdeptree --warn silence | grep -i requires-python
```
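If you'd rather check programmatically which versions of a package actually exist before pinning one, PyPI exposes release metadata over its public JSON API. The sketch below is illustrative; the helper names are my own, and the commented usage at the bottom needs network access:

```python
# Sketch: look up which versions of a package exist on PyPI and what Python
# they require, before pinning them in requirements.txt.
# The JSON endpoint (https://pypi.org/pypi/<name>/json) is PyPI's real API;
# the two helper functions are hypothetical conveniences, not a library API.
import json
import urllib.request


def fetch_release_data(package):
    """Download the per-version release metadata for `package` from PyPI."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["releases"]


def requires_python_map(releases):
    """Map each published version to its requires-python constraint.

    `releases` is the dict from the JSON API: version -> list of uploaded
    files, where each file may carry a "requires_python" field. Versions
    with no uploaded files map to None.
    """
    return {
        version: (files[0].get("requires_python") if files else None)
        for version, files in releases.items()
    }


# Usage (requires network access):
#   releases = fetch_release_data("LangSegment")
#   for version, constraint in sorted(requires_python_map(releases).items()):
#       print(version, "->", constraint or "any")
```

A version missing from the output is a version pip will never find, no matter which interpreter you use.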
Here's an edited version of your requirements.txt that should work under Python 3.8–3.9:
```text
requests==2.28.1
flask==3.0.2
pydub
pydantic
numpy
scipy
tensorboard
librosa==0.9.2
numba
pytorch-lightning
gradio==4.19
gradio_client
ffmpeg-python
onnxruntime
tqdm
funasr==1.0.0
cn2an
pypinyin
pyopenjtalk
g2p_en
torchaudio
modelscope==1.10.0
sentencepiece
transformers
chardet
PyYAML
psutil
jieba_fast
jieba
LangSegment==0.2.0
Faster_Whisper
flask_httpauth
flask_cors
```
Let me know your exact Python version if you'd like help tailoring the list more precisely.
```text
  File "/home/user/app/GPT_SoVITS/TTS_infer_pack/TextPreprocessor.py", line 10, in <module>
    import LangSegment
  File "/usr/local/lib/python3.10/site-packages/LangSegment/__init__.py", line 1, in <module>
    from .LangSegment import LangSegment,getTexts,classify,getCounts,printList,setLangfilters,getLangfilters,setfilters,getfilters
ImportError: cannot import name 'setLangfilters' from 'LangSegment.LangSegment' (/usr/local/lib/python3.10/site-packages/LangSegment/LangSegment.py)
```
Below is a concrete game‑plan to eliminate the ImportError: cannot import name 'setLangfilters' that stops GPT_SoVITS/TTS_infer_pack/TextPreprocessor.py on Python 3.10.
LangSegment 0.2.0 is the only version currently on PyPI, but the GPT‑SoVITS v2 code you are running was written for the now‑vanished LangSegment 0.3.5+, which contained the functions setLangfilters/getLangfilters. Because those symbols were dropped when the maintainer yanked anything newer than 0.2.0, importing fails on every modern install. You can solve the problem in three main ways: (1) pin and install a community‑archived 0.3.5 wheel; (2) patch GPT‑SoVITS to stop calling the missing functions; or (3) upgrade to GPT‑SoVITS v3+, which removed the LangSegment dependency altogether. Most users choose option 1 because it is quickest; option 3 is the cleanest long‑term; option 2 is a fallback if corporate policy forbids third‑party wheels.
| Fact | Evidence |
|---|---|
| Only LangSegment 0.2.0 is on PyPI today | |
| GPT‑SoVITS v2 depends on setLangfilters, present in ≥0.3.5 | |
| 0.3.5 was yanked; several projects now hit the same ImportError | GitHub |
| A backup of 0.3.5 exists on GitHub and still works | |
| 0.3.5 can still be installed through a direct wheel or git URL | GitHub |
| GPT‑SoVITS v3 migrated away from LangSegment | |
Option 1 works in minutes and is best for getting your current code running right now.
```bash
# Create an isolated env (Python 3.8 or 3.9 is safest)
conda create -n sovits38 python=3.8 -y
conda activate sovits38

# Option A – install from the community backup repo
pip install git+https://github.com/chameleon-ai/LangSegment-0.3.5-backup.git@main

# Option B – grab the wheel cached by Libraries.io
pip install LangSegment==0.3.5 --index-url https://pypi.python.org/simple --extra-index-url=https://huggingface.github.io/mirror/  # wheel still mirrored here
```
Why Python 3.8/3.9? 0.3.5 was released before Python 3.10; some users have reported minor warnings on 3.10+ but it runs fine in 3.8/3.9 GitHub.
After the install, run:
```bash
python -c "import LangSegment, sys; print(LangSegment.setLangfilters, sys.version)"
```
If you see a function reference and no traceback, the library is healthy.
If you must stay on 0.2.0, monkey‑patch missing APIs right after the import:
```python
import LangSegment

if not hasattr(LangSegment, "setLangfilters"):
    def _noop(*args, **kwargs):
        pass
    LangSegment.setLangfilters = _noop
    LangSegment.getLangfilters = _noop
Several projects do exactly this as a hot‑fix GitHub.
Downside: you lose the multilingual filter feature, so mixed‑language text may segment less cleanly.
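If several modules import LangSegment, the same hot‑fix can be centralized in one helper instead of repeating the hasattr dance at every import site. This is an illustrative sketch; ensure_noop_attrs is a made‑up name, not a LangSegment or GPT‑SoVITS API:

```python
# Sketch of a reusable compatibility shim: give any module-like object no-op
# stand-ins for attributes it is missing. `ensure_noop_attrs` is a
# hypothetical helper, not part of LangSegment or GPT-SoVITS.
def ensure_noop_attrs(module, names):
    """Attach a shared no-op function for each missing attribute in `names`.

    Returns the list of names that were actually patched, which is useful
    for logging which APIs the installed library version lacks.
    """
    def _noop(*args, **kwargs):
        return None

    added = []
    for name in names:
        if not hasattr(module, name):
            setattr(module, name, _noop)
            added.append(name)
    return added


# Usage with the real library would be:
#   import LangSegment
#   ensure_noop_attrs(LangSegment, ["setLangfilters", "getLangfilters"])
```

Existing attributes are left untouched, so the shim is safe to call even after you later upgrade to a LangSegment build that really has the functions.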
Pin the rescued version in requirements.txt so new teammates don't break their env:

```text
# Language segmentation
LangSegment==0.3.5   # pinned – 0.2.0 lacks setLangfilters
```
Add an environment marker if you keep using 3.8/3.9:

```text
LangSegment==0.3.5; python_version < "3.10"
```
Run pip-compile --generate-hashes (if you use pip‑tools) so future builds are reproducible.
| Symptom | Fix |
|---|---|
| pip still fetches 0.2.0 | Clear the wheel cache with pip cache purge; double‑check the --extra-index-url. |
| Wheel won't install on 3.10+ | Switch to 3.8/3.9 or build from source with python -m pip install --no-binary :all: LangSegment==0.3.5. |
| Function exists but segmentation is inaccurate | Upgrade to GPT‑SoVITS v3 (option 3); its new tokenizer is more stable across languages. |
| Corporate policy forbids unofficial wheels | Apply the monkey‑patch (option 2) and document the loss of filtering. |
Try option 1 in a fresh Python 3.8/3.9 environment; if the ImportError disappears, your TTS pipeline should resume. Let me know how it goes or if you prefer help migrating to GPT‑SoVITS v3.
Can I add

```text
pip install git+https://github.com/chameleon-ai/LangSegment-0.3.5-backup.git@main
```

to the requirements file?
Yes, you can include a GitHub repository directly in your requirements.txt file. To do this, use the following format:
```text
git+https://github.com/username/repository.git@branch_or_tag#egg=package_name
```
For your specific case with LangSegment, you can add:
```text
git+https://github.com/chameleon-ai/LangSegment-0.3.5-backup.git@main#egg=LangSegment
```
Explanation of the components:

- git+https://... : tells pip to install the package from a Git repository over HTTPS.
- @main : indicates the branch or tag to install from; replace main with a different branch or tag if needed.
- #egg=LangSegment : names the package; this is necessary for pip to identify the package name correctly.

Additional tips:

- The repository must have a setup.py (or equivalent) file at its root; pip relies on this to install the package.
- The #egg=package_name fragment is crucial for pip to process the dependency correctly.

By adding this line to your requirements.txt, pip will handle the installation of LangSegment directly from the specified GitHub repository.
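If your tooling ever needs to recover the package name or ref from such a VCS requirement line, the pieces can be split apart with the standard library. The function below is an illustrative sketch of the url@ref#egg=name shape shown above, not pip's own parser (it ignores edge cases like credentials in the URL):

```python
# Sketch: split a 'git+https://...@ref#egg=name' requirement line into its
# parts. Mimics the format pip accepts, but is NOT pip's internal parser.
from urllib.parse import parse_qs


def parse_git_requirement(line):
    """Return (repo_url, ref, egg_name) for a git+ requirement line.

    `ref` is None when no @branch/tag was given; `egg_name` is None when
    the #egg= fragment is missing.
    """
    assert line.startswith("git+"), "only git+ requirements handled here"
    url, _, fragment = line[len("git+"):].partition("#")
    # The fragment is query-string shaped ("egg=Name"), so parse_qs works.
    egg = parse_qs(fragment).get("egg", [None])[0]
    base, _, ref = url.partition("@")
    return base, ref or None, egg
```

Running it on the LangSegment line above yields the repo URL, the ref "main", and the egg name "LangSegment", which is exactly what pip uses to register the installed distribution.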