Mirror of https://github.com/uklans/cache-domains, synced 2025-06-18 15:42:56 +02:00
Update scripts and add combined_output flag (#251)

* Update scripts and add combined_output flag
* Add editorconfig to enforce formatting requirements
* Adjust generic references to monolithic
This commit is contained in:
parent 7fbb21e32c
commit 67594ce10c

8	.editorconfig	Normal file
@@ -0,0 +1,8 @@
+root = true
+
+[*]
+indent_style = space
+indent_size = 4
+trim_trailing_whitespace = true
+end_of_line = lf
+insert_final_newline = true
8	faq.md
@@ -20,15 +20,15 @@ Many of the maintainers of this repo also contribute to [the lancachenet project

There are several reasons why a particular service / CDN / website might not be on this list. Here are some of the more common ones:

1. It's not technically possible to cache it. Many popular websites, including video streaming sites and even some game CDNs, use SSL encryption (i.e. https URLs) to serve their content. Because the client opens a secure connection directly to the host, there is no way for the network operator to see what it is downloading, nor to cache it. Whilst there are several approaches to work around this, such as MITM techniques, they usually rely on control over the client device in order to install trusted SSL certificates - control that somebody running a BYOC LAN typically does not have over the devices customers bring.

- [These issues](https://github.com/uklans/cache-domains/issues?q=is%3Aissue+is%3Aopen+label%3Ahttps-cantfix) contain game CDNs that we would like to include, but cannot for this reason.

2. It's out of scope for a LAN. We try to keep this list targeted towards people running LANs. Whilst some non-game-related CDNs are included for things like Windows updates that use internet bandwidth at LANs, we do not go searching for unrelated sites / hostnames.

3. It's not a good cache target / it would not get a good hit ratio. Game downloads are a great cache target because they are large, remain the same for every user and are likely to be downloaded multiple times at a LAN. Other hostnames that only serve dynamic or media files, or content that is unlikely to be downloaded multiple times, are not good cache targets and can waste valuable storage space on your cache server. This can lead to potentially more valuable content being evicted from the cache when space runs low.

4. We simply don't have a tested list of hostnames for it yet. This is the category you can help with - if you have something that doesn't fall into one of the above reasons not to include it, we would love to review your PR. See [the readme](https://github.com/uklans/cache-domains) for instructions on how to add a new CDN.

## SNI Proxy / HTTPS
@@ -5,36 +5,41 @@
The respective shell scripts contained within this directory can be utilised to generate application-specific
configuration which can be used with:

-* Dnsmasq
-* Unbound
* AdGuard Home
+* BIND9
+* Dnsmasq/Pi-hole
+* Squid
+* Unbound

## Usage

1. Copy `config.example.json` to `config.json`.
2. Modify `config.json` to include your Cacheserver's IP(s) and the CDNs you plan to cache.
The following example assumes a single shared Cacheserver IP:
```json
{
+    "combined_output": false,
    "ips": {
-        "generic": ["10.10.10.200"]
+        "monolithic": ["10.10.10.200"]
    },
    "cache_domains": {
-        "blizzard": "generic",
+        "blizzard": "monolithic",
-        "epicgames": "generic",
+        "epicgames": "monolithic",
-        "nintendo": "generic",
+        "nintendo": "monolithic",
-        "origin": "generic",
+        "origin": "monolithic",
-        "riot": "generic",
+        "riot": "monolithic",
-        "sony": "generic",
+        "sony": "monolithic",
-        "steam": "generic",
+        "steam": "monolithic",
-        "uplay": "generic",
+        "uplay": "monolithic",
-        "wsus": "generic"
+        "wsus": "monolithic"
    }
}
```
3. Run the generation script relative to your DNS implementation: `bash create-dnsmasq.sh`.
-4. Copy files from `output/{dnsmasq,unbound}/*` to the respective locations for Dnsmasq/Unbound.
-5. Restart Dnsmasq or Unbound.
+4. If `combined_output` is set to `true` this will result in a single output file, `lancache.conf`, with all your enabled services (applies to AdGuard Home, Dnsmasq or Unbound).
+5. Copy files from `output/{adguardhome,dnsmasq,rpz,squid,unbound}/*` to the respective locations for your DNS implementation.
+6. Restart the appropriate service.
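A short run-through of steps 3 to 6, as a sketch only (file names depend on which services you enable in `config.json`; `jq` is already required by the generation scripts):

```sh
# Optional: sanity-check config.json before generating anything
jq . config.json                                          # fails if the JSON is malformed
jq -r '.cache_domains | to_entries[] | .key' config.json  # lists the services you have mapped

# Generate and inspect the Dnsmasq configuration
bash create-dnsmasq.sh
ls output/dnsmasq/
# combined_output = false -> one file per enabled service (e.g. steam.conf, blizzard.conf, ...)
# combined_output = true  -> a single output/dnsmasq/lancache.conf
```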
### Notes for Dnsmasq users

@@ -44,5 +49,5 @@ Multi-IP Lancache setups are only supported with Dnsmasq or Pi-hole versions >=

### Notes for AdGuard Home users

-1. In the `config.json`, you may want to add an entry for your non-cached DNS upstreams. You can input this in `ip.adguardhome_upstream` as an array.
-2. Once you have ran the script, you can point the upstream list to the text file generated. For example: `upstream_dns_file: "/root/cache-domains/scripts/output/adguardhome/cache-domains.txt"`
+1. Utilising `"combined_output": true` is more convenient.
+2. Once you have run the script and uploaded the file to the appropriate location, navigate to AdGuard Home -> Filters -> DNS blocklists -> Add blocklist -> Add a custom list.
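For reference, the custom list that `create-adguardhome.sh` generates is made of AdGuard `$dnsrewrite` rules; the hostnames below are placeholders (the real ones come from the repository's domain lists, the IP from your `config.json`):

```
||cdn.example.test^$dnsrewrite=10.10.10.200
|downloads.example.test^$dnsrewrite=10.10.10.200
```

The `||` form is emitted for wildcard entries (`*.cdn.example.test`), the single `|` form for exact hostnames.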
@@ -1,6 +1,6 @@
{
+    "combined_output": false,
    "ips": {
-        "adguardhome_upstream": ["94.140.14.140", "tls://dns.google", "https://dns.google/dns-query"],
        "steam": ["10.10.3.10", "10.10.3.11"],
        "origin": "10.10.3.12",
        "blizzard": "10.10.3.13",
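The hunk above cuts off before the `cache_domains` section of the example file; purely as an illustration of how the per-service IP groups defined under `ips` are referenced by name (a hypothetical fragment, not the remainder of the real file):

```json
{
    "cache_domains": {
        "steam": "steam",
        "origin": "origin",
        "blizzard": "blizzard"
    }
}
```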
55	scripts/create-adguardhome.sh	Normal file → Executable file
@@ -6,37 +6,31 @@ path="${basedir}/cache_domains.json"
export IFS=' '

test=$(which jq);
-out=$?
-if [ $out -gt 0 ] ; then
+if [ $? -gt 0 ] ; then
echo "This script requires jq to be installed."
echo "Your package manager should be able to find it"
exit 1
fi

cachenamedefault="disabled"
+combinedoutput=$(jq -r ".combined_output" config.json)

-while read -r line; do
+while read line; do
ip=$(jq ".ips[\"${line}\"]" config.json)
declare "cacheip${line}"="${ip}"
done <<< $(jq -r '.ips | to_entries[] | .key' config.json)

-agh_upstreams=$(jq -r ".ips[\"adguardhome_upstream\"] | .[]" config.json)
-
-while read -r line; do
+while read line; do
name=$(jq -r ".cache_domains[\"${line}\"]" config.json)
-declare "cachename${line}"="${name}"
+declare "cachename$line"="$name"
done <<< $(jq -r '.cache_domains | to_entries[] | .key' config.json)

rm -rf ${outputdir}
mkdir -p ${outputdir}

-# add upstreams
-echo "${agh_upstreams}" >> "${outputdir}/cache-domains.txt"
-
-while read -r entry; do
+while read entry; do
unset cacheip
unset cachename
-key=$(jq -r ".cache_domains[$entry].name" $path)
+key=$(jq -r ".cache_domains[$entry].name" ${path})
cachename="cachename${key}"
if [ -z "${!cachename}" ]; then
cachename="cachenamedefault"
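A side note on the pattern in the hunk above (used by all of these scripts): `declare "cacheip${line}"=...` builds dynamically named variables such as `cacheipmonolithic`, which the main loop later reads back through bash indirect expansion (`${!cacheipname}`). A minimal standalone illustration:

```sh
line="monolithic"
ip='["10.10.10.200"]'
declare "cacheip${line}"="${ip}"   # creates a variable literally named cacheipmonolithic

cacheipname="cacheip${line}"
echo "${!cacheipname}"             # indirect expansion prints: ["10.10.10.200"]
```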
@@ -46,32 +40,41 @@ while read -r entry; do
fi
cacheipname="cacheip${!cachename}"
cacheip=$(jq -r 'if type == "array" then .[] else . end' <<< ${!cacheipname} | xargs)
-while read -r fileid; do
+while read fileid; do
-while read -r filename; do
+while read filename; do
-destfilename="cache-domains.txt" #$(echo $filename | sed -e 's/txt/conf/')
+destfilename=$(echo ${filename} | sed -e 's/txt/conf/')
outputfile=${outputdir}/${destfilename}
touch ${outputfile}
-while read -r fileentry; do
+while read fileentry; do
-# Ignore comments, newlines and wildcards
+# Ignore comments and newlines
if [[ ${fileentry} == \#* ]] || [[ -z ${fileentry} ]]; then
continue
fi
+domainprefix="|"
+if [[ $fileentry =~ ^\*\. ]]; then
+domainprefix="||"
+fi
parsed=$(echo ${fileentry} | sed -e "s/^\*\.//")
-for i in ${cacheip}; do
-if grep -qx "\[/${parsed}/\]${i}" "${outputfile}"; then
+if grep -q "${domainprefix}${parsed}^\$dnsrewrite" ${outputfile}; then
continue
fi
-echo "[/${parsed}/]${i}" >> "${outputfile}"
+for i in ${cacheip}; do
+echo "${domainprefix}${parsed}^\$dnsrewrite=${i}" >> ${outputfile}
done
-done <<< $(cat ${basedir}/${filename} | sort);
+done <<< $(cat ${basedir}/$filename | sort);
-done <<< $(jq -r ".cache_domains[${entry}].domain_files[$fileid]" ${path})
+done <<< $(jq -r ".cache_domains[${entry}].domain_files[${fileid}]" ${path})
done <<< $(jq -r ".cache_domains[${entry}].domain_files | to_entries[] | .key" ${path})
done <<< $(jq -r '.cache_domains | to_entries[] | .key' ${path})

+if [[ ${combinedoutput} == "true" ]]; then
+for file in ${outputdir}/*; do f=${file//${outputdir}\/} && f=${f//.conf} && echo "# ${f^}" >> ${outputdir}/lancache.conf && cat ${file} >> ${outputdir}/lancache.conf && rm ${file}; done
+fi

cat << EOF
Configuration generation completed.

-Please point the setting upstream_dns_file in AdGuardHome.yaml to the generated file.
-For example:
-upstream_dns_file: "/root/cache-domains/scripts/output/adguardhome/cache-domains.txt"
+Please copy the following files:
+- ./${outputdir}/*.conf to /opt/adguardhome/work/userfilters/
+- Navigate to Adguard Home -> Filters -> DNS blocklists -> Add blocklist -> Add a custom list
+- Add list for each service or utilise the combined output for a single list
EOF
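The `combined_output` concatenation added above is a dense one-liner; unrolled purely for readability (behaviourally equivalent, not part of the change itself):

```sh
if [[ ${combinedoutput} == "true" ]]; then
    for file in ${outputdir}/*; do
        f=${file//${outputdir}\/}                       # strip the output directory prefix (${outputdir}/)
        f=${f//.conf}                                   # strip the ".conf" suffix
        echo "# ${f^}" >> ${outputdir}/lancache.conf    # section header, e.g. "# Blizzard"
        cat ${file} >> ${outputdir}/lancache.conf       # append the per-service rules
        rm ${file}                                      # remove the now-merged per-service file
    done
fi
```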
@@ -6,14 +6,14 @@ path="${basedir}/cache_domains.json"
export IFS=' '

test=$(which jq);
-out=$?
-if [ $out -gt 0 ] ; then
+if [ $? -gt 0 ] ; then
echo "This script requires jq to be installed."
echo "Your package manager should be able to find it"
exit 1
fi

cachenamedefault="disabled"
+combinedoutput=$(jq -r ".combined_output" config.json)

while read -r line; do
ip=$(jq ".ips[\"${line}\"]" config.json)
@@ -30,7 +30,7 @@ mkdir -p ${outputdir}
while read -r entry; do
unset cacheip
unset cachename
-key=$(jq -r ".cache_domains[$entry].name" $path)
+key=$(jq -r ".cache_domains[${entry}].name" ${path})
cachename="cachename${key}"
if [ -z "${!cachename}" ]; then
cachename="cachenamedefault"
@@ -42,7 +42,7 @@ while read -r entry; do
cacheip=$(jq -r 'if type == "array" then .[] else . end' <<< ${!cacheipname} | xargs)
while read -r fileid; do
while read -r filename; do
-destfilename=$(echo $filename | sed -e 's/txt/conf/')
+destfilename=$(echo ${filename} | sed -e 's/txt/conf/')
outputfile=${outputdir}/${destfilename}
touch ${outputfile}
while read -r fileentry; do
@@ -64,6 +64,10 @@ while read -r entry; do
done <<< $(jq -r ".cache_domains[${entry}].domain_files | to_entries[] | .key" ${path})
done <<< $(jq -r '.cache_domains | to_entries[] | .key' ${path})

+if [[ ${combinedoutput} == "true" ]]; then
+for file in ${outputdir}/*; do f=${file//${outputdir}\/} && f=${f//.conf} && echo "# ${f^}" >> ${outputdir}/lancache.conf && cat ${file} >> ${outputdir}/lancache.conf && rm ${file}; done
+fi

cat << EOF
Configuration generation completed.
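All of these generators share the same pattern: nested `while read ... done <<< $(jq ...)` loops that walk `cache_domains.json`, where each entry carries a service `name` and a list of `domain_files`. A hedged illustration of the individual queries involved (example values only):

```sh
# Indices of the entries in the cache_domains array: 0, 1, 2, ...
jq -r '.cache_domains | to_entries[] | .key' cache_domains.json

# Name and first domain-list file of one entry (values are illustrative)
jq -r '.cache_domains[0].name' cache_domains.json              # e.g. "blizzard"
jq -r '.cache_domains[0].domain_files[0]' cache_domains.json   # e.g. "blizzard.txt"
```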
@@ -7,8 +7,7 @@ basedomain=${1:-lancache.net}
export IFS=' '

test=$(which jq);
-out=$?
-if [ $out -gt 0 ] ; then
+if [ $? -gt 0 ] ; then
echo "This script requires jq to be installed."
echo "Your package manager should be able to find it"
exit 1
@@ -18,36 +17,36 @@ cachenamedefault="disabled"

while read line; do
ip=$(jq ".ips[\"${line}\"]" config.json)
-declare "cacheip$line"="$ip"
+declare "cacheip${line}"="${ip}"
done <<< $(jq -r '.ips | to_entries[] | .key' config.json)

while read line; do
name=$(jq -r ".cache_domains[\"${line}\"]" config.json)
-declare "cachename$line"="$name"
+declare "cachename${line}"="${name}"
done <<< $(jq -r '.cache_domains | to_entries[] | .key' config.json)

rm -rf ${outputdir}
mkdir -p ${outputdir}
-outputfile=${outputdir}/db.rpz.$basedomain
+outputfile=${outputdir}/db.rpz.${basedomain}
-cat > $outputfile << EOF
+cat > ${outputfile} << EOF
\$TTL 60 ; default TTL
-\$ORIGIN rpz.$basedomain.
+\$ORIGIN rpz.${basedomain}.
-@ SOA ns1.$basedomain. admin.$basedomain. (
+@ SOA ns1.${basedomain}. admin.${basedomain}. (
$(date +%Y%m%d01) ; serial
604800 ; refresh (1 week)
600 ; retry (10 mins)
600 ; expire (10 mins)
600 ; minimum (10 mins)
)
-NS ns1.$basedomain.
+NS ns1.${basedomain}.
-NS ns2.$basedomain.
+NS ns2.${basedomain}.

EOF

while read entry; do
unset cacheip
unset cachename
-key=$(jq -r ".cache_domains[$entry].name" $path)
+key=$(jq -r ".cache_domains[${entry}].name" ${path})
cachename="cachename${key}"
if [ -z "${!cachename}" ]; then
cachename="cachenamedefault"
@@ -59,16 +58,16 @@ while read entry; do
cacheip=$(jq -r 'if type == "array" then .[] else . end' <<< ${!cacheipname} | xargs)
while read fileid; do
while read filename; do
-echo "" >> $outputfile
+echo "" >> ${outputfile}
-echo "; $(echo $filename | sed -e 's/.txt$//')" >> $outputfile
+echo "; $(echo ${filename} | sed -e 's/.txt$//')" >> ${outputfile}
-destfilename=$(echo $filename | sed -e 's/txt/conf/')
+destfilename=$(echo ${filename} | sed -e 's/txt/conf/')
while read fileentry; do
# Ignore comments and newlines
-if [[ $fileentry == \#* ]] || [[ -z $fileentry ]]; then
+if [[ ${fileentry} == \#* ]] || [[ -z ${fileentry} ]]; then
continue
fi
-parsed=$(echo $fileentry)
+parsed=$(echo ${fileentry})
-if grep -qx "^\"${parsed}\". " $outputfile; then
+if grep -qx "^\"${parsed}\". " ${outputfile}; then
continue
fi
t=""
@@ -88,27 +87,27 @@
"${parsed}" \
"${t}" \
"${i}" \
->> $outputfile
+>> ${outputfile}
done
-done <<< $(cat ${basedir}/$filename | sort);
+done <<< $(cat ${basedir}/${filename} | sort);
-done <<< $(jq -r ".cache_domains[$entry].domain_files[$fileid]" $path)
+done <<< $(jq -r ".cache_domains[${entry}].domain_files[${fileid}]" ${path})
-done <<< $(jq -r ".cache_domains[$entry].domain_files | to_entries[] | .key" $path)
+done <<< $(jq -r ".cache_domains[${entry}].domain_files | to_entries[] | .key" ${path})
-done <<< $(jq -r '.cache_domains | to_entries[] | .key' $path)
+done <<< $(jq -r '.cache_domains | to_entries[] | .key' ${path})

cat << EOF
Configuration generation completed.

Please include the rpz zone in your bind configuration"
-- cp $outputfile /etc/bind
+- cp ${outputfile} /etc/bind
- configure the zone and use it

options {
[...]
-response-policy {zone "rpz.$basedomain";};
+response-policy {zone "rpz.${basedomain}";};
[...]
}
zone "rpz.$basedomain" {
type master;
-file "/etc/bind/db.rpz.$basedomain";
+file "/etc/bind/db.rpz.${basedomain}";
};
EOF
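If you want to check the generated RPZ zone before wiring it into BIND, `named-checkzone` (part of the BIND tools) can load it standalone; the paths below assume the default `lancache.net` base domain and an `output/rpz/` output directory:

```sh
named-checkzone rpz.lancache.net output/rpz/db.rpz.lancache.net
# On success it reports the zone as loaded, with a serial derived from the generation date, and prints OK.
```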
@@ -7,8 +7,7 @@ REGEX="^\\*\\.(.*)$"
export IFS=' '

test=$(which jq);
-out=$?
-if [ $out -gt 0 ] ; then
+if [ $? -gt 0 ] ; then
echo "This script requires jq to be installed."
echo "Your package manager should be able to find it"
exit 1
@@ -25,7 +24,7 @@ rm -rf ${outputdir}
mkdir -p ${outputdir}
while read -r entry; do
unset cachename
-key=$(jq -r ".cache_domains[$entry].name" $path)
+key=$(jq -r ".cache_domains[$entry].name" ${path})
cachename="cachename${key}"
if [ -z "${!cachename}" ]; then
cachename="cachenamedefault"
@@ -43,7 +42,7 @@ while read -r entry; do
if [[ ${fileentry} == \#* ]] || [[ -z ${fileentry} ]]; then
continue
fi
# Handle wildcards to squid wildcards
parsed=$(echo ${fileentry} | sed -e "s/^\*\./\./")
# If we have cdn.thing and *.cdn.thing in cache_domains
# Squid requires ONLY cdn.thing
@@ -57,10 +56,9 @@ while read -r entry; do
continue
fi
fi

echo "${parsed}" >> "${outputfile}"
done <<< $(cat ${basedir}/${filename} | sort);
-done <<< $(jq -r ".cache_domains[${entry}].domain_files[$fileid]" ${path})
+done <<< $(jq -r ".cache_domains[${entry}].domain_files[${fileid}]" ${path})
done <<< $(jq -r ".cache_domains[${entry}].domain_files | to_entries[] | .key" ${path})
done <<< $(jq -r '.cache_domains | to_entries[] | .key' ${path})
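The squid generator emits a plain list of destination domains (wildcard entries collapsed to a leading dot). One possible way to consume such a list in `squid.conf`, with an entirely illustrative file name and path:

```
# squid.conf sketch: treat the generated domain list as a destination-domain ACL
acl lancache dstdomain "/etc/squid/lancache_domains.txt"
# ...then attach whatever policy your setup needs to the "lancache" ACL
# (for example routing those requests to your cache, or exempting them from other rules).
```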
@@ -6,23 +6,23 @@ path="${basedir}/cache_domains.json"
export IFS=' '

test=$(which jq);
-out=$?
-if [ $out -gt 0 ] ; then
+if [ $? -gt 0 ] ; then
echo "This script requires jq to be installed."
echo "Your package manager should be able to find it"
exit 1
fi

cachenamedefault="disabled"
+combinedoutput=$(jq -r ".combined_output" config.json)

while read line; do
ip=$(jq ".ips[\"${line}\"]" config.json)
-declare "cacheip$line"="$ip"
+declare "cacheip${line}"="${ip}"
done <<< $(jq -r '.ips | to_entries[] | .key' config.json)

while read line; do
name=$(jq -r ".cache_domains[\"${line}\"]" config.json)
-declare "cachename$line"="$name"
+declare "cachename${line}"="${name}"
done <<< $(jq -r '.cache_domains | to_entries[] | .key' config.json)

rm -rf ${outputdir}
@@ -30,7 +30,7 @@ mkdir -p ${outputdir}
while read entry; do
unset cacheip
unset cachename
-key=$(jq -r ".cache_domains[$entry].name" $path)
+key=$(jq -r ".cache_domains[${entry}].name" ${path})
cachename="cachename${key}"
if [ -z "${!cachename}" ]; then
cachename="cachenamedefault"
@@ -42,29 +42,33 @@ while read entry; do
cacheip=$(jq -r 'if type == "array" then .[] else . end' <<< ${!cacheipname} | xargs)
while read fileid; do
while read filename; do
-destfilename=$(echo $filename | sed -e 's/txt/conf/')
+destfilename=$(echo ${filename} | sed -e 's/txt/conf/')
outputfile=${outputdir}/${destfilename}
-touch $outputfile
+touch ${outputfile}
while read fileentry; do
# Ignore comments and newlines
-if [[ $fileentry == \#* ]] || [[ -z $fileentry ]]; then
+if [[ ${fileentry} == \#* ]] || [[ -z ${fileentry} ]]; then
continue
fi
-parsed=$(echo $fileentry | sed -e "s/^\*\.//")
+parsed=$(echo ${fileentry} | sed -e "s/^\*\.//")
-if grep -qx " local-zone: \"${parsed}\" redirect" $outputfile; then
+if grep -qx " local-zone: \"${parsed}\" redirect" ${outputfile}; then
continue
fi
-if [[ $(head -n 1 $outputfile) != "server:" ]]; then
+if [[ $(head -n 1 ${outputfile}) != "server:" ]]; then
-echo "server:" >> $outputfile
+echo "server:" >> ${outputfile}
fi
-echo " local-zone: \"${parsed}\" redirect" >> $outputfile
+echo " local-zone: \"${parsed}\" redirect" >> ${outputfile}
for i in ${cacheip}; do
-echo " local-data: \"${parsed} 30 IN A ${i}\"" >> $outputfile
+echo " local-data: \"${parsed} 30 IN A ${i}\"" >> ${outputfile}
done
-done <<< $(cat ${basedir}/$filename | sort);
+done <<< $(cat ${basedir}/${filename} | sort);
-done <<< $(jq -r ".cache_domains[$entry].domain_files[$fileid]" $path)
+done <<< $(jq -r ".cache_domains[${entry}].domain_files[${fileid}]" ${path})
-done <<< $(jq -r ".cache_domains[$entry].domain_files | to_entries[] | .key" $path)
+done <<< $(jq -r ".cache_domains[${entry}].domain_files | to_entries[] | .key" ${path})
-done <<< $(jq -r '.cache_domains | to_entries[] | .key' $path)
+done <<< $(jq -r '.cache_domains | to_entries[] | .key' ${path})

+if [[ ${combinedoutput} == "true" ]]; then
+for file in ${outputdir}/*; do f=${file//${outputdir}\/} && f=${f//.conf} && echo "# ${f^}" >> ${outputdir}/lancache.conf && cat ${file} >> ${outputdir}/lancache.conf && rm ${file}; done
+fi

cat << EOF
Configuration generation completed.
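For reference, the Unbound files written by this script consist of `local-zone`/`local-data` stanzas of the following shape (hostname and IPs are placeholders; the TTL of 30 comes from the script above, and one `local-data` line is emitted per cache IP):

```
server:
    local-zone: "cdn.example.test" redirect
    local-data: "cdn.example.test 30 IN A 10.10.10.200"
    local-data: "cdn.example.test 30 IN A 10.10.10.201"
```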