File manager - Edit - /usr/local/CyberPanel/lib/python3.10/site-packages/tldextract/__pycache__/tldextract.cpython-310.pyc
[Compiled CPython 3.10 bytecode for the tldextract module. The bytecode itself is not human-readable; the recoverable plain-text content is the module docstring and a few embedded string constants, reproduced below.]

`tldextract` accurately separates a URL's subdomain, domain, and public suffix.

It does this via the Public Suffix List (PSL).

    >>> import tldextract

    >>> tldextract.extract('http://forums.news.cnn.com/')
    ExtractResult(subdomain='forums.news', domain='cnn', suffix='com', is_private=False)

    >>> tldextract.extract('http://forums.bbc.co.uk/')  # United Kingdom
    ExtractResult(subdomain='forums', domain='bbc', suffix='co.uk', is_private=False)

    >>> tldextract.extract('http://www.worldbank.org.kg/')  # Kyrgyzstan
    ExtractResult(subdomain='www', domain='worldbank', suffix='org.kg', is_private=False)

Note subdomain and suffix are _optional_. Not all URL-like inputs have a subdomain or a valid suffix.

    >>> tldextract.extract('google.com')
    ExtractResult(subdomain='', domain='google', suffix='com', is_private=False)

    >>> tldextract.extract('google.notavalidsuffix')
    ExtractResult(subdomain='google', domain='notavalidsuffix', suffix='', is_private=False)

    >>> tldextract.extract('http://127.0.0.1:8080/deployed/')
    ExtractResult(subdomain='', domain='127.0.0.1', suffix='', is_private=False)

To rejoin the original hostname, if it was indeed a valid, registered hostname:

    >>> ext = tldextract.extract('http://forums.bbc.co.uk')
    >>> ext.registered_domain
    'bbc.co.uk'
    >>> ext.fqdn
    'forums.bbc.co.uk'

Other readable string constants in the compiled module: the logger name 'tldextract', the TLDEXTRACT_CACHE_TIMEOUT environment variable name, and the Public Suffix List source URLs:

    https://publicsuffix.org/list/public_suffix_list.dat
    https://raw.githubusercontent.com/publicsuffix/list/master/public_suffix_list.dat
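The docstring above only shows the module-level extract() helper. As a supplementary sketch (not part of the compiled file), the snippet below shows how a dedicated TLDExtract instance can be configured with an explicit cache directory and the same suffix-list URLs embedded in the bytecode. Parameter names follow the tldextract 5.x API (cache_dir, suffix_list_urls, fallback_to_snapshot); the cache path is an illustrative assumption, not a CyberPanel default.

    import tldextract

    # Build a reusable extractor with an explicit cache location and the
    # Public Suffix List sources embedded in this module.
    # NOTE: /tmp/tldextract-cache is an illustrative path, not a real default.
    extractor = tldextract.TLDExtract(
        cache_dir="/tmp/tldextract-cache",
        suffix_list_urls=(
            "https://publicsuffix.org/list/public_suffix_list.dat",
            "https://raw.githubusercontent.com/publicsuffix/list/master/public_suffix_list.dat",
        ),
        fallback_to_snapshot=True,  # use the bundled PSL snapshot if both URLs are unreachable
    )

    result = extractor("http://forums.bbc.co.uk/")
    print(result.subdomain, result.domain, result.suffix)  # forums bbc co.uk
    print(result.registered_domain)                        # bbc.co.uk

Reusing a single TLDExtract instance is generally preferable to repeated module-level extract() calls in long-running code, since the parsed suffix list is loaded once and then kept in memory.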