Mirror of https://github.com/linuxserver/docker-python.git
Synced 2026-02-19 17:24:19 +08:00

Compare commits: alpine321...alpine322 (14 commits)
| Author | SHA1 | Date |
|---|---|---|
| | c30ce081cb | |
| | 95e06d0e1a | |
| | 519fd3f727 | |
| | 5345705be6 | |
| | 63b9627546 | |
| | ec680c00a7 | |
| | 7755c5bd74 | |
| | 46e874f469 | |
| | dacef7b810 | |
| | 998085c533 | |
| | 11f91ba1ff | |
| | 94fa31d096 | |
| | 035d64b057 | |
| | 5cede33a44 | |
.github/CONTRIBUTING.md (vendored, 4 changed lines)
@@ -24,7 +24,7 @@
 ## Readme
 
 If you would like to change our readme, please __**do not**__ directly edit the readme, as it is auto-generated on each commit.
-Instead edit the [readme-vars.yml](https://github.com/linuxserver/docker-python/edit/alpine321/readme-vars.yml).
+Instead edit the [readme-vars.yml](https://github.com/linuxserver/docker-python/edit/alpine322/readme-vars.yml).
 
 These variables are used in a template for our [Jenkins Builder](https://github.com/linuxserver/docker-jenkins-builder) as part of an ansible play.
 Most of these variables are also carried over to [docs.linuxserver.io](https://docs.linuxserver.io)
@@ -115,7 +115,7 @@ Once registered you can define the dockerfile to use with `-f Dockerfile.aarch64`
 
 ## Update the changelog
 
-If you are modifying the Dockerfiles or any of the startup scripts in [root](https://github.com/linuxserver/docker-python/tree/alpine321/root), add an entry to the changelog
+If you are modifying the Dockerfiles or any of the startup scripts in [root](https://github.com/linuxserver/docker-python/tree/alpine322/root), add an entry to the changelog
 
 ```yml
 changelogs:
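The excerpt above ends mid-example. For context, a complete changelog entry in readme-vars.yml typically looks like the following sketch; the exact key names (`date`, `desc`) are an assumption inferred from the rendered Versions list, not taken from this diff:

```yml
changelogs:
  - { date: "11.06.25:", desc: "Release alpine322 tag." }
```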
.github/PULL_REQUEST_TEMPLATE.md (vendored, 2 changed lines)
@@ -21,7 +21,7 @@
 
 ------------------------------
 
-- [ ] I have read the [contributing](https://github.com/linuxserver/docker-python/blob/alpine321/.github/CONTRIBUTING.md) guideline and understand that I have made the correct modifications
+- [ ] I have read the [contributing](https://github.com/linuxserver/docker-python/blob/alpine322/.github/CONTRIBUTING.md) guideline and understand that I have made the correct modifications
 
 ------------------------------
 
.github/workflows/external_trigger.yml (vendored, 30 changed lines)
@@ -7,31 +7,31 @@ permissions:
   contents: read
 
 jobs:
-  external-trigger-alpine321:
+  external-trigger-alpine322:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v4.1.1
 
       - name: External Trigger
-        if: github.ref == 'refs/heads/alpine321'
+        if: github.ref == 'refs/heads/alpine322'
         env:
           SKIP_EXTERNAL_TRIGGER: ${{ vars.SKIP_EXTERNAL_TRIGGER }}
         run: |
           printf "# External trigger for docker-python\n\n" >> $GITHUB_STEP_SUMMARY
-          if grep -q "^python_alpine321_" <<< "${SKIP_EXTERNAL_TRIGGER}"; then
+          if grep -q "^python_alpine322_" <<< "${SKIP_EXTERNAL_TRIGGER}"; then
             echo "> [!NOTE]" >> $GITHUB_STEP_SUMMARY
-            echo "> Github organizational variable \`SKIP_EXTERNAL_TRIGGER\` contains \`python_alpine321_\`; will skip trigger if version matches." >> $GITHUB_STEP_SUMMARY
-          elif grep -q "^python_alpine321" <<< "${SKIP_EXTERNAL_TRIGGER}"; then
+            echo "> Github organizational variable \`SKIP_EXTERNAL_TRIGGER\` contains \`python_alpine322_\`; will skip trigger if version matches." >> $GITHUB_STEP_SUMMARY
+          elif grep -q "^python_alpine322" <<< "${SKIP_EXTERNAL_TRIGGER}"; then
             echo "> [!WARNING]" >> $GITHUB_STEP_SUMMARY
-            echo "> Github organizational variable \`SKIP_EXTERNAL_TRIGGER\` contains \`python_alpine321\`; skipping trigger." >> $GITHUB_STEP_SUMMARY
+            echo "> Github organizational variable \`SKIP_EXTERNAL_TRIGGER\` contains \`python_alpine322\`; skipping trigger." >> $GITHUB_STEP_SUMMARY
             exit 0
           fi
           echo "> [!NOTE]" >> $GITHUB_STEP_SUMMARY
-          echo "> External trigger running off of alpine321 branch. To disable this trigger, add \`python_alpine321\` into the Github organizational variable \`SKIP_EXTERNAL_TRIGGER\`." >> $GITHUB_STEP_SUMMARY
+          echo "> External trigger running off of alpine322 branch. To disable this trigger, add \`python_alpine322\` into the Github organizational variable \`SKIP_EXTERNAL_TRIGGER\`." >> $GITHUB_STEP_SUMMARY
           printf "\n## Retrieving external version\n\n" >> $GITHUB_STEP_SUMMARY
           EXT_RELEASE=$(curl -u ${{ secrets.CR_USER }}:${{ secrets.CR_PAT }} -sX GET https://api.github.com/repos/python/cpython/tags | jq -r '.[] | select(.name | contains("rc") or contains("a") or contains("b") | not) | .name' | sed 's|^v||g' | sort -rV | head -1)
           echo "Type is \`custom_version_command\`" >> $GITHUB_STEP_SUMMARY
-          if grep -q "^python_alpine321_${EXT_RELEASE}" <<< "${SKIP_EXTERNAL_TRIGGER}"; then
+          if grep -q "^python_alpine322_${EXT_RELEASE}" <<< "${SKIP_EXTERNAL_TRIGGER}"; then
             echo "> [!WARNING]" >> $GITHUB_STEP_SUMMARY
             echo "> Github organizational variable \`SKIP_EXTERNAL_TRIGGER\` matches current external release; skipping trigger." >> $GITHUB_STEP_SUMMARY
             exit 0
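The EXT_RELEASE pipeline above (`jq 'select(...)' | sed 's|^v||g' | sort -rV | head -1`) drops any tag whose name contains "rc", "a", or "b", strips the leading "v", and picks the highest remaining version. A minimal Python sketch of the same filtering logic (the tag names below are illustrative, not taken from the live API):

```python
# Mimic: jq 'select(.name | contains("rc") or contains("a") or contains("b") | not)'
#        | sed 's|^v||g' | sort -rV | head -1
def latest_stable(tags):
    # Drop pre-release tags (any name containing "rc", "a", or "b" -- the
    # same substring check the jq filter performs), strip the leading "v",
    # then version-sort and take the highest.
    finals = [t.lstrip("v") for t in tags
              if not any(m in t for m in ("rc", "a", "b"))]
    return max(finals, key=lambda v: tuple(int(p) for p in v.split(".")))

tags = ["v3.13.0rc2", "v3.12.4", "v3.13.0a5", "v3.12.10", "v3.11.9"]
print(latest_stable(tags))  # -> 3.12.10
```

Note that, like the jq expression, this is a plain substring test, so it would also exclude a hypothetical final tag that happened to contain the letter "a" or "b" anywhere in its name.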
@@ -39,7 +39,7 @@ jobs:
           if [ -z "${EXT_RELEASE}" ] || [ "${EXT_RELEASE}" == "null" ]; then
             echo "> [!WARNING]" >> $GITHUB_STEP_SUMMARY
             echo "> Can't retrieve external version, exiting" >> $GITHUB_STEP_SUMMARY
-            FAILURE_REASON="Can't retrieve external version for python branch alpine321"
+            FAILURE_REASON="Can't retrieve external version for python branch alpine322"
             GHA_TRIGGER_URL="https://github.com/linuxserver/docker-python/actions/runs/${{ github.run_id }}"
             curl -X POST -H "Content-Type: application/json" --data '{"avatar_url": "https://cdn.discordapp.com/avatars/354986384542662657/df91181b3f1cf0ef1592fbe18e0962d7.png","embeds": [{"color": 16711680,
             "description": "**Trigger Failed** \n**Reason:** '"${FAILURE_REASON}"' \n**Trigger URL:** '"${GHA_TRIGGER_URL}"' \n"}],
@@ -50,7 +50,7 @@ jobs:
           echo "Sanitized external version: \`${EXT_RELEASE_SANITIZED}\`" >> $GITHUB_STEP_SUMMARY
           echo "Retrieving last pushed version" >> $GITHUB_STEP_SUMMARY
           image="linuxserver/python"
-          tag="alpine321"
+          tag="alpine322"
           token=$(curl -sX GET \
             "https://ghcr.io/token?scope=repository%3Alinuxserver%2Fpython%3Apull" \
             | jq -r '.token')
@@ -96,7 +96,7 @@ jobs:
           if [ -z "${IMAGE_VERSION}" ]; then
             echo "> [!WARNING]" >> $GITHUB_STEP_SUMMARY
             echo "Can't retrieve last pushed version, exiting" >> $GITHUB_STEP_SUMMARY
-            FAILURE_REASON="Can't retrieve last pushed version for python tag alpine321"
+            FAILURE_REASON="Can't retrieve last pushed version for python tag alpine322"
             curl -X POST -H "Content-Type: application/json" --data '{"avatar_url": "https://cdn.discordapp.com/avatars/354986384542662657/df91181b3f1cf0ef1592fbe18e0962d7.png","embeds": [{"color": 16711680,
             "description": "**Trigger Failed** \n**Reason:** '"${FAILURE_REASON}"' \n"}],
             "username": "Github Actions"}' ${{ secrets.DISCORD_WEBHOOK }}
@@ -106,14 +106,14 @@ jobs:
           if [ "${EXT_RELEASE_SANITIZED}" == "${IMAGE_VERSION}" ]; then
             echo "Sanitized version \`${EXT_RELEASE_SANITIZED}\` already pushed, exiting" >> $GITHUB_STEP_SUMMARY
             exit 0
-          elif [ $(curl -s https://ci.linuxserver.io/job/Docker-Pipeline-Builders/job/docker-python/job/alpine321/lastBuild/api/json | jq -r '.building') == "true" ]; then
+          elif [ $(curl -s https://ci.linuxserver.io/job/Docker-Pipeline-Builders/job/docker-python/job/alpine322/lastBuild/api/json | jq -r '.building') == "true" ]; then
             echo "New version \`${EXT_RELEASE}\` found; but there already seems to be an active build on Jenkins; exiting" >> $GITHUB_STEP_SUMMARY
             exit 0
           else
             if [[ "${artifacts_found}" == "false" ]]; then
               echo "> [!WARNING]" >> $GITHUB_STEP_SUMMARY
               echo "> New version detected, but not all artifacts are published yet; skipping trigger" >> $GITHUB_STEP_SUMMARY
-              FAILURE_REASON="New version ${EXT_RELEASE} for python tag alpine321 is detected, however not all artifacts are uploaded to upstream release yet. Will try again later."
+              FAILURE_REASON="New version ${EXT_RELEASE} for python tag alpine322 is detected, however not all artifacts are uploaded to upstream release yet. Will try again later."
               curl -X POST -H "Content-Type: application/json" --data '{"avatar_url": "https://cdn.discordapp.com/avatars/354986384542662657/df91181b3f1cf0ef1592fbe18e0962d7.png","embeds": [{"color": 9802903,
               "description": "**Trigger Failed** \n**Reason:** '"${FAILURE_REASON}"' \n"}],
               "username": "Github Actions"}' ${{ secrets.DISCORD_WEBHOOK }}
@@ -124,7 +124,7 @@ jobs:
             echo "All artifacts seem to be uploaded." >> $GITHUB_STEP_SUMMARY
           fi
           response=$(curl -iX POST \
-            https://ci.linuxserver.io/job/Docker-Pipeline-Builders/job/docker-python/job/alpine321/buildWithParameters?PACKAGE_CHECK=false \
+            https://ci.linuxserver.io/job/Docker-Pipeline-Builders/job/docker-python/job/alpine322/buildWithParameters?PACKAGE_CHECK=false \
             --user ${{ secrets.JENKINS_USER }}:${{ secrets.JENKINS_TOKEN }} | grep -i location | sed "s|^[L|l]ocation: \(.*\)|\1|")
           echo "Jenkins [job queue url](${response%$'\r'})" >> $GITHUB_STEP_SUMMARY
           echo "Sleeping 10 seconds until job starts" >> $GITHUB_STEP_SUMMARY
@@ -139,7 +139,7 @@ jobs:
             --data-urlencode "description=GHA external trigger https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}" \
             --data-urlencode "Submit=Submit"
           echo "**** Notifying Discord ****"
-          TRIGGER_REASON="A version change was detected for python tag alpine321. Old version:${IMAGE_VERSION} New version:${EXT_RELEASE_SANITIZED}"
+          TRIGGER_REASON="A version change was detected for python tag alpine322. Old version:${IMAGE_VERSION} New version:${EXT_RELEASE_SANITIZED}"
           curl -X POST -H "Content-Type: application/json" --data '{"avatar_url": "https://cdn.discordapp.com/avatars/354986384542662657/df91181b3f1cf0ef1592fbe18e0962d7.png","embeds": [{"color": 9802903,
           "description": "**Build Triggered** \n**Reason:** '"${TRIGGER_REASON}"' \n**Build URL:** '"${buildurl}display/redirect"' \n"}],
           "username": "Github Actions"}' ${{ secrets.DISCORD_WEBHOOK }}
.github/workflows/greetings.yml (vendored, 2 changed lines)
@@ -15,5 +15,5 @@ jobs:
       - uses: actions/first-interaction@v1
         with:
           issue-message: 'Thanks for opening your first issue here! Be sure to follow the relevant issue templates, or risk having this issue marked as invalid.'
-          pr-message: 'Thanks for opening this pull request! Be sure to follow the [pull request template](https://github.com/linuxserver/docker-python/blob/alpine321/.github/PULL_REQUEST_TEMPLATE.md)!'
+          pr-message: 'Thanks for opening this pull request! Be sure to follow the [pull request template](https://github.com/linuxserver/docker-python/blob/alpine322/.github/PULL_REQUEST_TEMPLATE.md)!'
           repo-token: ${{ secrets.GITHUB_TOKEN }}
@@ -1,6 +1,6 @@
 # syntax=docker/dockerfile:1
 
-FROM ghcr.io/linuxserver/baseimage-alpine:3.21 AS buildstage
+FROM ghcr.io/linuxserver/baseimage-alpine:3.22 AS buildstage
 
 # set version label
 ARG PYTHON_VERSION
@@ -1,6 +1,6 @@
 # syntax=docker/dockerfile:1
 
-FROM ghcr.io/linuxserver/baseimage-alpine:arm64v8-3.21 AS buildstage
+FROM ghcr.io/linuxserver/baseimage-alpine:arm64v8-3.22 AS buildstage
 
 # set version label
 ARG PYTHON_VERSION
Jenkinsfile (vendored, 92 changed lines)
@@ -76,7 +76,7 @@ pipeline {
         script{
           env.EXIT_STATUS = ''
           env.LS_RELEASE = sh(
-            script: '''docker run --rm quay.io/skopeo/stable:v1 inspect docker://ghcr.io/${LS_USER}/${CONTAINER_NAME}:alpine321 2>/dev/null | jq -r '.Labels.build_version' | awk '{print $3}' | grep '\\-ls' || : ''',
+            script: '''docker run --rm quay.io/skopeo/stable:v1 inspect docker://ghcr.io/${LS_USER}/${CONTAINER_NAME}:alpine322 2>/dev/null | jq -r '.Labels.build_version' | awk '{print $3}' | grep '\\-ls' || : ''',
             returnStdout: true).trim()
           env.LS_RELEASE_NOTES = sh(
             script: '''cat readme-vars.yml | awk -F \\" '/date: "[0-9][0-9].[0-9][0-9].[0-9][0-9]:/ {print $4;exit;}' | sed -E ':a;N;$!ba;s/\\r{0,1}\\n/\\\\n/g' ''',
@@ -109,7 +109,7 @@ pipeline {
         script{
           env.LS_TAG_NUMBER = sh(
             script: '''#! /bin/bash
-            tagsha=$(git rev-list -n 1 alpine321-${LS_RELEASE} 2>/dev/null)
+            tagsha=$(git rev-list -n 1 alpine322-${LS_RELEASE} 2>/dev/null)
             if [ "${tagsha}" == "${COMMIT_SHA}" ]; then
               echo ${LS_RELEASE_NUMBER}
             elif [ -z "${GIT_COMMIT}" ]; then
@@ -187,10 +187,10 @@ pipeline {
         }
       }
     }
-    // If this is a alpine321 build use live docker endpoints
+    // If this is a alpine322 build use live docker endpoints
     stage("Set ENV live build"){
       when {
-        branch "alpine321"
+        branch "alpine322"
         environment name: 'CHANGE_ID', value: ''
       }
       steps {
@@ -200,13 +200,13 @@ pipeline {
           env.GITLABIMAGE = 'registry.gitlab.com/linuxserver.io/' + env.LS_REPO + '/' + env.CONTAINER_NAME
           env.QUAYIMAGE = 'quay.io/linuxserver.io/' + env.CONTAINER_NAME
           if (env.MULTIARCH == 'true') {
-            env.CI_TAGS = 'amd64-alpine321-' + env.EXT_RELEASE_CLEAN + '-ls' + env.LS_TAG_NUMBER + '|arm64v8-alpine321-' + env.EXT_RELEASE_CLEAN + '-ls' + env.LS_TAG_NUMBER
+            env.CI_TAGS = 'amd64-alpine322-' + env.EXT_RELEASE_CLEAN + '-ls' + env.LS_TAG_NUMBER + '|arm64v8-alpine322-' + env.EXT_RELEASE_CLEAN + '-ls' + env.LS_TAG_NUMBER
           } else {
-            env.CI_TAGS = 'alpine321-' + env.EXT_RELEASE_CLEAN + '-ls' + env.LS_TAG_NUMBER
+            env.CI_TAGS = 'alpine322-' + env.EXT_RELEASE_CLEAN + '-ls' + env.LS_TAG_NUMBER
           }
           env.VERSION_TAG = env.EXT_RELEASE_CLEAN + '-ls' + env.LS_TAG_NUMBER
-          env.META_TAG = 'alpine321-' + env.EXT_RELEASE_CLEAN + '-ls' + env.LS_TAG_NUMBER
-          env.EXT_RELEASE_TAG = 'alpine321-version-' + env.EXT_RELEASE_CLEAN
+          env.META_TAG = 'alpine322-' + env.EXT_RELEASE_CLEAN + '-ls' + env.LS_TAG_NUMBER
+          env.EXT_RELEASE_TAG = 'alpine322-version-' + env.EXT_RELEASE_CLEAN
           env.BUILDCACHE = 'docker.io/lsiodev/buildcache,registry.gitlab.com/linuxserver.io/docker-jenkins-builder/lsiodev-buildcache,ghcr.io/linuxserver/lsiodev-buildcache,quay.io/linuxserver/lsiodev-buildcache'
           env.CITEST_IMAGETAG = 'latest'
        }
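The live-build tag strings assembled in this stage follow a fixed scheme: `<branch>-<upstream version>-ls<build number>`, optionally fanned out per architecture. A small Python sketch of the same concatenation (the version and build number below are illustrative):

```python
def live_tags(ext_release_clean, ls_tag_number, branch="alpine322", multiarch=True):
    """Mirror the Jenkinsfile's live-build tag assembly."""
    # META_TAG: branch + upstream version + linuxserver build number
    meta_tag = f"{branch}-{ext_release_clean}-ls{ls_tag_number}"
    # CI_TAGS: per-arch tags joined with "|" when MULTIARCH is true
    if multiarch:
        ci_tags = f"amd64-{meta_tag}|arm64v8-{meta_tag}"
    else:
        ci_tags = meta_tag
    ext_release_tag = f"{branch}-version-{ext_release_clean}"
    return ci_tags, meta_tag, ext_release_tag

ci_tags, meta_tag, ext_tag = live_tags("3.13.4", 12)
print(meta_tag)  # -> alpine322-3.13.4-ls12
print(ci_tags)   # -> amd64-alpine322-3.13.4-ls12|arm64v8-alpine322-3.13.4-ls12
```

The dev and PR stages below use the same scheme, swapping the `-ls<N>` suffix for `-pkg-<tag>-dev-<sha>` (plus `-pr-<N>` for pull requests).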
@@ -215,7 +215,7 @@ pipeline {
     // If this is a dev build use dev docker endpoints
     stage("Set ENV dev build"){
       when {
-        not {branch "alpine321"}
+        not {branch "alpine322"}
         environment name: 'CHANGE_ID', value: ''
       }
       steps {
@@ -225,13 +225,13 @@ pipeline {
           env.GITLABIMAGE = 'registry.gitlab.com/linuxserver.io/' + env.LS_REPO + '/lsiodev-' + env.CONTAINER_NAME
           env.QUAYIMAGE = 'quay.io/linuxserver.io/lsiodev-' + env.CONTAINER_NAME
           if (env.MULTIARCH == 'true') {
-            env.CI_TAGS = 'amd64-alpine321-' + env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA + '|arm64v8-alpine321-' + env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA
+            env.CI_TAGS = 'amd64-alpine322-' + env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA + '|arm64v8-alpine322-' + env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA
           } else {
-            env.CI_TAGS = 'alpine321-' + env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA
+            env.CI_TAGS = 'alpine322-' + env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA
           }
           env.VERSION_TAG = env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA
-          env.META_TAG = 'alpine321-' + env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA
-          env.EXT_RELEASE_TAG = 'alpine321-version-' + env.EXT_RELEASE_CLEAN
+          env.META_TAG = 'alpine322-' + env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA
+          env.EXT_RELEASE_TAG = 'alpine322-version-' + env.EXT_RELEASE_CLEAN
           env.DOCKERHUB_LINK = 'https://hub.docker.com/r/' + env.DEV_DOCKERHUB_IMAGE + '/tags/'
           env.BUILDCACHE = 'docker.io/lsiodev/buildcache,registry.gitlab.com/linuxserver.io/docker-jenkins-builder/lsiodev-buildcache,ghcr.io/linuxserver/lsiodev-buildcache,quay.io/linuxserver.io/lsiodev-buildcache'
           env.CITEST_IMAGETAG = 'develop'
@@ -250,13 +250,13 @@ pipeline {
           env.GITLABIMAGE = 'registry.gitlab.com/linuxserver.io/' + env.LS_REPO + '/lspipepr-' + env.CONTAINER_NAME
           env.QUAYIMAGE = 'quay.io/linuxserver.io/lspipepr-' + env.CONTAINER_NAME
           if (env.MULTIARCH == 'true') {
-            env.CI_TAGS = 'amd64-alpine321-' + env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA + '-pr-' + env.PULL_REQUEST + '|arm64v8-alpine321-' + env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA + '-pr-' + env.PULL_REQUEST
+            env.CI_TAGS = 'amd64-alpine322-' + env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA + '-pr-' + env.PULL_REQUEST + '|arm64v8-alpine322-' + env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA + '-pr-' + env.PULL_REQUEST
           } else {
-            env.CI_TAGS = 'alpine321-' + env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA + '-pr-' + env.PULL_REQUEST
+            env.CI_TAGS = 'alpine322-' + env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA + '-pr-' + env.PULL_REQUEST
           }
           env.VERSION_TAG = env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA + '-pr-' + env.PULL_REQUEST
-          env.META_TAG = 'alpine321-' + env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA + '-pr-' + env.PULL_REQUEST
-          env.EXT_RELEASE_TAG = 'alpine321-version-' + env.EXT_RELEASE_CLEAN
+          env.META_TAG = 'alpine322-' + env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA + '-pr-' + env.PULL_REQUEST
+          env.EXT_RELEASE_TAG = 'alpine322-version-' + env.EXT_RELEASE_CLEAN
           env.CODE_URL = 'https://github.com/' + env.LS_USER + '/' + env.LS_REPO + '/pull/' + env.PULL_REQUEST
           env.DOCKERHUB_LINK = 'https://hub.docker.com/r/' + env.PR_DOCKERHUB_IMAGE + '/tags/'
           env.BUILDCACHE = 'docker.io/lsiodev/buildcache,registry.gitlab.com/linuxserver.io/docker-jenkins-builder/lsiodev-buildcache,ghcr.io/linuxserver/lsiodev-buildcache,quay.io/linuxserver.io/lsiodev-buildcache'
@@ -295,7 +295,7 @@ pipeline {
     // Use helper containers to render templated files
     stage('Update-Templates') {
       when {
-        branch "alpine321"
+        branch "alpine322"
         environment name: 'CHANGE_ID', value: ''
         expression {
           env.CONTAINER_NAME != null
@@ -307,24 +307,24 @@ pipeline {
         TEMPDIR=$(mktemp -d)
         docker pull ghcr.io/linuxserver/jenkins-builder:latest
         # Cloned repo paths for templating:
-        # ${TEMPDIR}/docker-${CONTAINER_NAME}: Cloned branch alpine321 of ${LS_USER}/${LS_REPO} for running the jenkins builder on
-        # ${TEMPDIR}/repo/${LS_REPO}: Cloned branch alpine321 of ${LS_USER}/${LS_REPO} for commiting various templated file changes and pushing back to Github
+        # ${TEMPDIR}/docker-${CONTAINER_NAME}: Cloned branch alpine322 of ${LS_USER}/${LS_REPO} for running the jenkins builder on
+        # ${TEMPDIR}/repo/${LS_REPO}: Cloned branch alpine322 of ${LS_USER}/${LS_REPO} for commiting various templated file changes and pushing back to Github
         # ${TEMPDIR}/docs/docker-documentation: Cloned docs repo for pushing docs updates to Github
         # ${TEMPDIR}/unraid/docker-templates: Cloned docker-templates repo to check for logos
         # ${TEMPDIR}/unraid/templates: Cloned templates repo for commiting unraid template changes and pushing back to Github
-        git clone --branch alpine321 --depth 1 https://github.com/${LS_USER}/${LS_REPO}.git ${TEMPDIR}/docker-${CONTAINER_NAME}
+        git clone --branch alpine322 --depth 1 https://github.com/${LS_USER}/${LS_REPO}.git ${TEMPDIR}/docker-${CONTAINER_NAME}
         docker run --rm -v ${TEMPDIR}/docker-${CONTAINER_NAME}:/tmp -e LOCAL=true -e PUID=$(id -u) -e PGID=$(id -g) ghcr.io/linuxserver/jenkins-builder:latest
         echo "Starting Stage 1 - Jenkinsfile update"
         if [[ "$(md5sum Jenkinsfile | awk '{ print $1 }')" != "$(md5sum ${TEMPDIR}/docker-${CONTAINER_NAME}/Jenkinsfile | awk '{ print $1 }')" ]]; then
           mkdir -p ${TEMPDIR}/repo
           git clone https://github.com/${LS_USER}/${LS_REPO}.git ${TEMPDIR}/repo/${LS_REPO}
           cd ${TEMPDIR}/repo/${LS_REPO}
-          git checkout -f alpine321
+          git checkout -f alpine322
           cp ${TEMPDIR}/docker-${CONTAINER_NAME}/Jenkinsfile ${TEMPDIR}/repo/${LS_REPO}/
           git add Jenkinsfile
           git commit -m 'Bot Updating Templated Files'
-          git pull https://LinuxServer-CI:${GITHUB_TOKEN}@github.com/${LS_USER}/${LS_REPO}.git alpine321
-          git push https://LinuxServer-CI:${GITHUB_TOKEN}@github.com/${LS_USER}/${LS_REPO}.git alpine321
+          git pull https://LinuxServer-CI:${GITHUB_TOKEN}@github.com/${LS_USER}/${LS_REPO}.git alpine322
+          git push https://LinuxServer-CI:${GITHUB_TOKEN}@github.com/${LS_USER}/${LS_REPO}.git alpine322
           echo "true" > /tmp/${COMMIT_SHA}-${BUILD_NUMBER}
           echo "Updating Jenkinsfile and exiting build, new one will trigger based on commit"
           rm -Rf ${TEMPDIR}
@@ -343,13 +343,13 @@ pipeline {
           mkdir -p ${TEMPDIR}/repo
           git clone https://github.com/${LS_USER}/${LS_REPO}.git ${TEMPDIR}/repo/${LS_REPO}
           cd ${TEMPDIR}/repo/${LS_REPO}
-          git checkout -f alpine321
+          git checkout -f alpine322
           for i in ${TEMPLATES_TO_DELETE}; do
             git rm "${i}"
           done
           git commit -m 'Bot Updating Templated Files'
-          git pull https://LinuxServer-CI:${GITHUB_TOKEN}@github.com/${LS_USER}/${LS_REPO}.git alpine321
-          git push https://LinuxServer-CI:${GITHUB_TOKEN}@github.com/${LS_USER}/${LS_REPO}.git alpine321
+          git pull https://LinuxServer-CI:${GITHUB_TOKEN}@github.com/${LS_USER}/${LS_REPO}.git alpine322
+          git push https://LinuxServer-CI:${GITHUB_TOKEN}@github.com/${LS_USER}/${LS_REPO}.git alpine322
           echo "true" > /tmp/${COMMIT_SHA}-${BUILD_NUMBER}
           echo "Deleting old/deprecated templates and exiting build, new one will trigger based on commit"
           rm -Rf ${TEMPDIR}
@@ -365,7 +365,7 @@ pipeline {
           mkdir -p ${TEMPDIR}/repo
           git clone https://github.com/${LS_USER}/${LS_REPO}.git ${TEMPDIR}/repo/${LS_REPO}
           cd ${TEMPDIR}/repo/${LS_REPO}
-          git checkout -f alpine321
+          git checkout -f alpine322
           cd ${TEMPDIR}/docker-${CONTAINER_NAME}
           mkdir -p ${TEMPDIR}/repo/${LS_REPO}/.github/workflows
           mkdir -p ${TEMPDIR}/repo/${LS_REPO}/.github/ISSUE_TEMPLATE
@@ -378,8 +378,8 @@ pipeline {
           fi
           git add readme-vars.yml ${TEMPLATED_FILES}
           git commit -m 'Bot Updating Templated Files'
-          git pull https://LinuxServer-CI:${GITHUB_TOKEN}@github.com/${LS_USER}/${LS_REPO}.git alpine321
-          git push https://LinuxServer-CI:${GITHUB_TOKEN}@github.com/${LS_USER}/${LS_REPO}.git alpine321
+          git pull https://LinuxServer-CI:${GITHUB_TOKEN}@github.com/${LS_USER}/${LS_REPO}.git alpine322
+          git push https://LinuxServer-CI:${GITHUB_TOKEN}@github.com/${LS_USER}/${LS_REPO}.git alpine322
           echo "true" > /tmp/${COMMIT_SHA}-${BUILD_NUMBER}
           echo "Updating templates and exiting build, new one will trigger based on commit"
           rm -Rf ${TEMPDIR}
@@ -446,7 +446,7 @@ pipeline {
     // Exit the build if the Templated files were just updated
     stage('Template-exit') {
       when {
-        branch "alpine321"
+        branch "alpine322"
         environment name: 'CHANGE_ID', value: ''
         environment name: 'FILES_UPDATED', value: 'true'
         expression {
@@ -459,10 +459,10 @@ pipeline {
         }
       }
     }
-    // If this is a alpine321 build check the S6 service file perms
+    // If this is a alpine322 build check the S6 service file perms
     stage("Check S6 Service file Permissions"){
       when {
-        branch "alpine321"
+        branch "alpine322"
         environment name: 'CHANGE_ID', value: ''
         environment name: 'EXIT_STATUS', value: ''
       }
@@ -751,7 +751,7 @@ pipeline {
           -e DOCKER_LOGS_TIMEOUT=\"${CI_DELAY}\" \
           -e TAGS=\"${CI_TAGS}\" \
           -e META_TAG=\"${META_TAG}\" \
-          -e RELEASE_TAG=\"alpine321\" \
+          -e RELEASE_TAG=\"alpine322\" \
           -e PORT=\"${CI_PORT}\" \
           -e SSL=\"${CI_SSL}\" \
           -e BASE=\"${DIST_IMAGE}\" \
@@ -791,7 +791,7 @@ pipeline {
             CACHEIMAGE=${i}
           fi
         done
-        docker buildx imagetools create --prefer-index=false -t ${PUSHIMAGE}:${META_TAG} -t ${PUSHIMAGE}:alpine321 -t ${PUSHIMAGE}:${EXT_RELEASE_TAG} ${CACHEIMAGE}:amd64-${COMMIT_SHA}-${BUILD_NUMBER} || \
+        docker buildx imagetools create --prefer-index=false -t ${PUSHIMAGE}:${META_TAG} -t ${PUSHIMAGE}:alpine322 -t ${PUSHIMAGE}:${EXT_RELEASE_TAG} ${CACHEIMAGE}:amd64-${COMMIT_SHA}-${BUILD_NUMBER} || \
         { if [[ "${PUSHIMAGE}" != "${QUAYIMAGE}" ]]; then exit 1; fi; }
         if [ -n "${SEMVER}" ]; then
           docker buildx imagetools create --prefer-index=false -t ${PUSHIMAGE}:${SEMVER} ${CACHEIMAGE}:amd64-${COMMIT_SHA}-${BUILD_NUMBER} || \
@@ -820,9 +820,9 @@ pipeline {
             CACHEIMAGE=${i}
           fi
         done
-        docker buildx imagetools create --prefer-index=false -t ${MANIFESTIMAGE}:amd64-${META_TAG} -t ${MANIFESTIMAGE}:amd64-alpine321 -t ${MANIFESTIMAGE}:amd64-${EXT_RELEASE_TAG} ${CACHEIMAGE}:amd64-${COMMIT_SHA}-${BUILD_NUMBER} || \
+        docker buildx imagetools create --prefer-index=false -t ${MANIFESTIMAGE}:amd64-${META_TAG} -t ${MANIFESTIMAGE}:amd64-alpine322 -t ${MANIFESTIMAGE}:amd64-${EXT_RELEASE_TAG} ${CACHEIMAGE}:amd64-${COMMIT_SHA}-${BUILD_NUMBER} || \
         { if [[ "${MANIFESTIMAGE}" != "${QUAYIMAGE}" ]]; then exit 1; fi; }
-        docker buildx imagetools create --prefer-index=false -t ${MANIFESTIMAGE}:arm64v8-${META_TAG} -t ${MANIFESTIMAGE}:arm64v8-alpine321 -t ${MANIFESTIMAGE}:arm64v8-${EXT_RELEASE_TAG} ${CACHEIMAGE}:arm64v8-${COMMIT_SHA}-${BUILD_NUMBER} || \
+        docker buildx imagetools create --prefer-index=false -t ${MANIFESTIMAGE}:arm64v8-${META_TAG} -t ${MANIFESTIMAGE}:arm64v8-alpine322 -t ${MANIFESTIMAGE}:arm64v8-${EXT_RELEASE_TAG} ${CACHEIMAGE}:arm64v8-${COMMIT_SHA}-${BUILD_NUMBER} || \
         { if [[ "${MANIFESTIMAGE}" != "${QUAYIMAGE}" ]]; then exit 1; fi; }
         if [ -n "${SEMVER}" ]; then
           docker buildx imagetools create --prefer-index=false -t ${MANIFESTIMAGE}:amd64-${SEMVER} ${CACHEIMAGE}:amd64-${COMMIT_SHA}-${BUILD_NUMBER} || \
@@ -832,7 +832,7 @@ pipeline {
           fi
         done
         for MANIFESTIMAGE in "${IMAGE}" "${GITLABIMAGE}" "${GITHUBIMAGE}" "${QUAYIMAGE}"; do
-          docker buildx imagetools create -t ${MANIFESTIMAGE}:alpine321 ${MANIFESTIMAGE}:amd64-alpine321 ${MANIFESTIMAGE}:arm64v8-alpine321 || \
+          docker buildx imagetools create -t ${MANIFESTIMAGE}:alpine322 ${MANIFESTIMAGE}:amd64-alpine322 ${MANIFESTIMAGE}:arm64v8-alpine322 || \
          { if [[ "${MANIFESTIMAGE}" != "${QUAYIMAGE}" ]]; then exit 1; fi; }
          docker buildx imagetools create -t ${MANIFESTIMAGE}:${META_TAG} ${MANIFESTIMAGE}:amd64-${META_TAG} ${MANIFESTIMAGE}:arm64v8-${META_TAG} || \
          { if [[ "${MANIFESTIMAGE}" != "${QUAYIMAGE}" ]]; then exit 1; fi; }
@@ -850,7 +850,7 @@ pipeline {
     // If this is a public release tag it in the LS Github
     stage('Github-Tag-Push-Release') {
       when {
-        branch "alpine321"
+        branch "alpine322"
         expression {
           env.LS_RELEASE != env.EXT_RELEASE_CLEAN + '-ls' + env.LS_TAG_NUMBER
         }
@@ -866,21 +866,21 @@ pipeline {
         else
           AUTO_RELEASE_NOTES=$(curl -fsL -H "Authorization: token ${GITHUB_TOKEN}" -H "Accept: application/vnd.github+json" -X POST https://api.github.com/repos/${LS_USER}/${LS_REPO}/releases/generate-notes \
           -d '{"tag_name":"'${META_TAG}'",\
-          "target_commitish": "alpine321"}' \
+          "target_commitish": "alpine322"}' \
           | jq -r '.body' | sed 's|## What.s Changed||')
         fi
         echo "Pushing New tag for current commit ${META_TAG}"
         curl -H "Authorization: token ${GITHUB_TOKEN}" -X POST https://api.github.com/repos/${LS_USER}/${LS_REPO}/git/tags \
         -d '{"tag":"'${META_TAG}'",\
         "object": "'${COMMIT_SHA}'",\
-        "message": "Tagging Release '${EXT_RELEASE_CLEAN}'-ls'${LS_TAG_NUMBER}' to alpine321",\
+        "message": "Tagging Release '${EXT_RELEASE_CLEAN}'-ls'${LS_TAG_NUMBER}' to alpine322",\
         "type": "commit",\
         "tagger": {"name": "LinuxServer-CI","email": "ci@linuxserver.io","date": "'${GITHUB_DATE}'"}}'
         echo "Pushing New release for Tag"
         echo "Updating to ${EXT_RELEASE_CLEAN}" > releasebody.json
         jq -n \
         --arg tag_name "$META_TAG" \
-        --arg target_commitish "alpine321" \
+        --arg target_commitish "alpine322" \
         --arg ci_url "${CI_URL:-N/A}" \
         --arg ls_notes "$AUTO_RELEASE_NOTES" \
         --arg remote_notes "$(cat releasebody.json)" \
@@ -890,7 +890,7 @@ pipeline {
         "name": $tag_name,
         "body": ("**CI Report:**\\n\\n" + $ci_url + "\\n\\n**LinuxServer Changes:**\\n\\n" + $ls_notes + "\\n\\n**Remote Changes:**\\n\\n" + $remote_notes),
         "draft": false,
-        "prerelease": true }' > releasebody.json.done
+        "prerelease": false }' > releasebody.json.done
         curl -H "Authorization: token ${GITHUB_TOKEN}" -X POST https://api.github.com/repos/${LS_USER}/${LS_REPO}/releases -d @releasebody.json.done
         '''
       }
@@ -898,14 +898,14 @@ pipeline {
     // Add protection to the release branch
     stage('Github-Release-Branch-Protection') {
      when {
-        branch "alpine321"
+        branch "alpine322"
        environment name: 'CHANGE_ID', value: ''
        environment name: 'EXIT_STATUS', value: ''
      }
      steps {
-        echo "Setting up protection for release branch alpine321"
+        echo "Setting up protection for release branch alpine322"
        sh '''#! /bin/bash
-        curl -H "Authorization: token ${GITHUB_TOKEN}" -X PUT https://api.github.com/repos/${LS_USER}/${LS_REPO}/branches/alpine321/protection \
+        curl -H "Authorization: token ${GITHUB_TOKEN}" -X PUT https://api.github.com/repos/${LS_USER}/${LS_REPO}/branches/alpine322/protection \
        -d $(jq -c . << EOF
        {
          "required_status_checks": null,
@@ -1,5 +1,5 @@
 <!-- DO NOT EDIT THIS FILE MANUALLY -->
-<!-- Please read https://github.com/linuxserver/docker-python/blob/alpine321/.github/CONTRIBUTING.md -->
+<!-- Please read https://github.com/linuxserver/docker-python/blob/alpine322/.github/CONTRIBUTING.md -->
 [![linuxserver.io](https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/linuxserver_medium.png)](https://linuxserver.io)
 
 [![Blog](https://img.shields.io/static/v1.svg?color=94398d&labelColor=555555&logoColor=ffffff&style=for-the-badge&label=linuxserver.io&message=Blog)](https://blog.linuxserver.io "all the things you can do with our containers including How-To guides, opinions and much more!")
@@ -44,6 +44,7 @@ This image only contains the compiled python files for Alpine, and is meant to b
 
 ## Versions
 
+* **11.06.25:** - Release `alpine322` tag.
 * **05.12.24:** - Release `alpine321` tag.
 * **07.06.24:** - Release `alpine320` tag.
 * **07.03.24:** - Initial release.
@@ -4,9 +4,9 @@
 project_name: docker-python
 external_type: na
 custom_version_command: "curl -sX GET https://api.github.com/repos/python/cpython/tags | jq -r '.[] | select(.name | contains(\"rc\") or contains(\"a\") or contains(\"b\") | not) | .name' | sed 's|^v||g' | sort -rV | head -1"
-release_type: prerelease
-release_tag: alpine321
-ls_branch: alpine321
+release_type: stable
+release_tag: alpine322
+ls_branch: alpine322
 skip_package_check: true
 unraid_template_sync: false
 unraid_template: false
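The `custom_version_command` above selects the newest stable CPython tag by dropping any tag name containing an `rc`, `a`, or `b` marker, stripping the leading `v`, and version-sorting the remainder. A rough Python equivalent of that shell pipeline, using hypothetical sample tag data instead of a live GitHub API call:

```python
def latest_stable(tag_names):
    """Drop pre-release tags (names containing 'rc', 'a', or 'b'),
    strip the leading 'v', and return the highest remaining version."""
    stable = [
        name.lstrip("v")
        for name in tag_names
        if not any(marker in name for marker in ("rc", "a", "b"))
    ]
    # Like `sort -rV | head -1`: compare version components numerically.
    stable.sort(key=lambda v: tuple(int(part) for part in v.split(".")),
                reverse=True)
    return stable[0] if stable else None

tags = ["v3.13.0rc1", "v3.12.4", "v3.11.9", "v3.12.10", "v3.13.0a5"]
print(latest_stable(tags))  # 3.12.10
```

Note that `contains("a")`/`contains("b")` work here only because stable CPython tags contain no letters besides the leading `v`.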
@@ -1,17 +0,0 @@
Author: Dave Jones <dave.jones@canonical.com>
Description: Use aligned access for _sha3 module on ARM.
--- a/Modules/_sha3/sha3module.c
+++ b/Modules/_sha3/sha3module.c
@@ -64,6 +64,12 @@
#define PLATFORM_BYTE_ORDER IS_BIG_ENDIAN
#endif

+/* Bus error on 32-bit ARM due to un-aligned memory accesses; 64-bit ARM
+ * doesn't complain but un-aligned memory accesses are sub-optimal */
+#if defined(__arm__) || defined(__aarch64__)
+#define NO_MISALIGNED_ACCESSES
+#endif
+
/* mangle names */
#define KeccakF1600_FastLoop_Absorb _PySHA3_KeccakF1600_FastLoop_Absorb
#define Keccak_HashFinal _PySHA3_Keccak_HashFinal
@@ -1,16 +0,0 @@
diff --git a/Lib/asyncio/unix_events.py b/Lib/asyncio/unix_events.py
index f34a5b4b44..b1d0f1e61e 100644
--- a/Lib/asyncio/unix_events.py
+++ b/Lib/asyncio/unix_events.py
@@ -369,6 +369,11 @@ class _UnixSelectorEventLoop(selector_events.BaseSelectorEventLoop):
fut.set_result(total_sent)
return

+ # On 32-bit architectures truncate to 1GiB to avoid OverflowError,
+ # see bpo-38319.
+ if sys.maxsize < 2 ** 32:
+ blocksize = min(blocksize, 2 ** 30)
+
try:
sent = os.sendfile(fd, fileno, offset, blocksize)
except (BlockingIOError, InterruptedError):
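The deleted patch above backported the bpo-38319 guard: on 32-bit builds, `os.sendfile()` counts that do not fit the platform's `ssize_t` can raise `OverflowError`, so each call is capped at 1 GiB. A standalone sketch of that clamp (not the actual asyncio code; `maxsize` is parameterized here so both build widths can be simulated):

```python
import sys

def clamp_blocksize(blocksize, maxsize=sys.maxsize):
    """Mirror the patch's guard: truncate to 1 GiB on 32-bit builds,
    where sys.maxsize is 2**31 - 1."""
    if maxsize < 2 ** 32:
        blocksize = min(blocksize, 2 ** 30)
    return blocksize

# On a simulated 32-bit build, an 8 GiB request is capped at 1 GiB.
print(clamp_blocksize(8 * 2 ** 30, maxsize=2 ** 31 - 1))  # 1073741824
```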
@@ -1,112 +0,0 @@
From b7dc795dfd175c0d25a479cfaf94a13c368a5a7b Mon Sep 17 00:00:00 2001
From: "J. Nick Koston" <nick@koston.org>
Date: Sat, 22 Jul 2023 16:07:40 -0500
Subject: [PATCH] gh-106527: asyncio: optimize to add/remove readers and
writers (#106528)

---
Lib/asyncio/selector_events.py | 64 +++++++++----------
Lib/test/test_asyncio/test_selector_events.py | 36 +++++------
2 files changed, 47 insertions(+), 53 deletions(-)

diff --git a/Lib/asyncio/selector_events.py b/Lib/asyncio/selector_events.py
index f895750e3c..d521b4e2e2 100644
--- a/Lib/asyncio/selector_events.py
+++ b/Lib/asyncio/selector_events.py
@@ -274,9 +274,8 @@ def _ensure_fd_no_transport(self, fd):
def _add_reader(self, fd, callback, *args):
self._check_closed()
handle = events.Handle(callback, args, self, None)
- try:
- key = self._selector.get_key(fd)
- except KeyError:
+ key = self._selector.get_map().get(fd)
+ if key is None:
self._selector.register(fd, selectors.EVENT_READ,
(handle, None))
else:
@@ -290,30 +289,27 @@ def _add_reader(self, fd, callback, *args):
def _remove_reader(self, fd):
if self.is_closed():
return False
- try:
- key = self._selector.get_key(fd)
- except KeyError:
+ key = self._selector.get_map().get(fd)
+ if key is None:
return False
+ mask, (reader, writer) = key.events, key.data
+ mask &= ~selectors.EVENT_READ
+ if not mask:
+ self._selector.unregister(fd)
else:
- mask, (reader, writer) = key.events, key.data
- mask &= ~selectors.EVENT_READ
- if not mask:
- self._selector.unregister(fd)
- else:
- self._selector.modify(fd, mask, (None, writer))
+ self._selector.modify(fd, mask, (None, writer))

- if reader is not None:
- reader.cancel()
- return True
- else:
- return False
+ if reader is not None:
+ reader.cancel()
+ return True
+ else:
+ return False

def _add_writer(self, fd, callback, *args):
self._check_closed()
handle = events.Handle(callback, args, self, None)
- try:
- key = self._selector.get_key(fd)
- except KeyError:
+ key = self._selector.get_map().get(fd)
+ if key is None:
self._selector.register(fd, selectors.EVENT_WRITE,
(None, handle))
else:
@@ -328,24 +324,22 @@ def _remove_writer(self, fd):
"""Remove a writer callback."""
if self.is_closed():
return False
- try:
- key = self._selector.get_key(fd)
- except KeyError:
+ key = self._selector.get_map().get(fd)
+ if key is None:
return False
+ mask, (reader, writer) = key.events, key.data
+ # Remove both writer and connector.
+ mask &= ~selectors.EVENT_WRITE
+ if not mask:
+ self._selector.unregister(fd)
else:
- mask, (reader, writer) = key.events, key.data
- # Remove both writer and connector.
- mask &= ~selectors.EVENT_WRITE
- if not mask:
- self._selector.unregister(fd)
- else:
- self._selector.modify(fd, mask, (reader, None))
+ self._selector.modify(fd, mask, (reader, None))

- if writer is not None:
- writer.cancel()
- return True
- else:
- return False
+ if writer is not None:
+ writer.cancel()
+ return True
+ else:
+ return False

def add_reader(self, fd, callback, *args):
"""Add a reader callback."""
--
2.39.2 (Apple Git-143)
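The commit message above describes the recurring pattern in this patch: replacing a `try`/`except KeyError` around `get_key()` with a single dict-style `.get()` lookup, which avoids raising an exception on the common unregistered-fd path. A minimal standalone sketch of the two access styles (toy `registry` dict, not the real selector internals):

```python
registry = {4: "reader-handle"}  # illustrative fd -> handle mapping

def lookup_with_exception(fd):
    # Old style: raising and catching KeyError on every miss.
    try:
        return registry[fd]
    except KeyError:
        return None

def lookup_with_get(fd):
    # New style: one dict lookup, no exception machinery on a miss.
    return registry.get(fd)

# Both forms agree on hits and misses.
assert lookup_with_exception(4) == lookup_with_get(4)
assert lookup_with_exception(99) is None and lookup_with_get(99) is None
```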
@@ -1,72 +0,0 @@
From aeef8591e41b68341af308e56a744396c66879cc Mon Sep 17 00:00:00 2001
From: "J. Nick Koston" <nick@koston.org>
Date: Fri, 14 Jul 2023 08:46:30 -1000
Subject: [PATCH] gh-106554: replace `_BaseSelectorImpl._key_from_fd` with
`dict.get` (#106555)

---
Lib/selectors.py | 21 ++++-----------------
1 file changed, 4 insertions(+), 17 deletions(-)

diff --git a/Lib/selectors.py b/Lib/selectors.py
index dfcc125dcd..6d82935445 100644
--- a/Lib/selectors.py
+++ b/Lib/selectors.py
@@ -276,19 +276,6 @@ def close(self):
def get_map(self):
return self._map

- def _key_from_fd(self, fd):
- """Return the key associated to a given file descriptor.
-
- Parameters:
- fd -- file descriptor
-
- Returns:
- corresponding key, or None if not found
- """
- try:
- return self._fd_to_key[fd]
- except KeyError:
- return None


class SelectSelector(_BaseSelectorImpl):
@@ -336,7 +323,7 @@ def select(self, timeout=None):
if fd in w:
events |= EVENT_WRITE

- key = self._key_from_fd(fd)
+ key = self._fd_to_key.get(fd)
if key:
ready.append((key, events & key.events))
return ready
@@ -426,7 +413,7 @@ def select(self, timeout=None):
if event & ~self._EVENT_WRITE:
events |= EVENT_READ

- key = self._key_from_fd(fd)
+ key = self._fd_to_key.get(fd)
if key:
ready.append((key, events & key.events))
return ready
@@ -479,7 +466,7 @@ def select(self, timeout=None):
if event & ~select.EPOLLOUT:
events |= EVENT_READ

- key = self._key_from_fd(fd)
+ key = self._fd_to_key.get(fd)
if key:
ready.append((key, events & key.events))
return ready
@@ -574,7 +561,7 @@ def select(self, timeout=None):
if flag == select.KQ_FILTER_WRITE:
events |= EVENT_WRITE

- key = self._key_from_fd(fd)
+ key = self._fd_to_key.get(fd)
if key:
ready.append((key, events & key.events))
return ready
--
2.39.2 (Apple Git-143)
@@ -1,42 +0,0 @@
From 8d2f3c36caf9ecdee1176314b18388aef6e7f2c2 Mon Sep 17 00:00:00 2001
From: "J. Nick Koston" <nick@koston.org>
Date: Thu, 13 Jul 2023 09:18:53 -1000
Subject: [PATCH] gh-106664: selectors: add get() method to _SelectorMapping
(#106665)

It can be used to avoid raising and catching KeyError twice via __getitem__.

Co-authored-by: Inada Naoki <songofacandy@gmail.com>
---
Lib/selectors.py | 14 +++++++++-----
Lib/test/test_selectors.py | 6 ++++++
2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/Lib/selectors.py b/Lib/selectors.py
index af6a4f94b5..dfcc125dcd 100644
--- a/Lib/selectors.py
+++ b/Lib/selectors.py
@@ -66,12 +66,16 @@ def __init__(self, selector):
def __len__(self):
return len(self._selector._fd_to_key)

+ def get(self, fileobj, default=None):
+ fd = self._selector._fileobj_lookup(fileobj)
+ return self._selector._fd_to_key.get(fd, default)
+
def __getitem__(self, fileobj):
- try:
- fd = self._selector._fileobj_lookup(fileobj)
- return self._selector._fd_to_key[fd]
- except KeyError:
- raise KeyError("{!r} is not registered".format(fileobj)) from None
+ fd = self._selector._fileobj_lookup(fileobj)
+ key = self._selector._fd_to_key.get(fd)
+ if key is None:
+ raise KeyError("{!r} is not registered".format(fileobj))
+ return key

def __iter__(self):
return iter(self._selector._fd_to_key)
--
2.39.2 (Apple Git-143)
@@ -1,56 +0,0 @@
From aecf6aca515a203a823a87c711f15cbb82097c8b Mon Sep 17 00:00:00 2001
From: "J. Nick Koston" <nick@koston.org>
Date: Tue, 18 Jul 2023 00:16:32 -1000
Subject: [PATCH] gh-106751: selectors: optimize EpollSelector.select()
(#106754)

Co-authored-by: Pieter Eendebak <pieter.eendebak@gmail.com>
---
Lib/selectors.py | 17 +++++++++--------
1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/Lib/selectors.py b/Lib/selectors.py
index 6d82935445..a42d156340 100644
--- a/Lib/selectors.py
+++ b/Lib/selectors.py
@@ -430,6 +430,9 @@ class PollSelector(_PollLikeSelector):

if hasattr(select, 'epoll'):

+ _NOT_EPOLLIN = ~select.EPOLLIN
+ _NOT_EPOLLOUT = ~select.EPOLLOUT
+
class EpollSelector(_PollLikeSelector):
"""Epoll-based selector."""
_selector_cls = select.epoll
@@ -452,22 +455,20 @@ def select(self, timeout=None):
# epoll_wait() expects `maxevents` to be greater than zero;
# we want to make sure that `select()` can be called when no
# FD is registered.
- max_ev = max(len(self._fd_to_key), 1)
+ max_ev = len(self._fd_to_key) or 1

ready = []
try:
fd_event_list = self._selector.poll(timeout, max_ev)
except InterruptedError:
return ready
- for fd, event in fd_event_list:
- events = 0
- if event & ~select.EPOLLIN:
- events |= EVENT_WRITE
- if event & ~select.EPOLLOUT:
- events |= EVENT_READ

- key = self._fd_to_key.get(fd)
+ fd_to_key = self._fd_to_key
+ for fd, event in fd_event_list:
+ key = fd_to_key.get(fd)
if key:
+ events = ((event & _NOT_EPOLLIN and EVENT_WRITE)
+ | (event & _NOT_EPOLLOUT and EVENT_READ))
ready.append((key, events & key.events))
return ready

--
2.39.2 (Apple Git-143)
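The `and`/`|` expression in the patch above relies on Python's short-circuit operators returning an operand (`x and y` is `y` when `x` is truthy, else `x`, which is `0` here), making it equivalent to the original `if`-chain. A self-contained sanity check of that equivalence, with the selector constants defined locally rather than imported (the `EPOLLIN`/`EPOLLOUT` values are the usual Linux ones, assumed for illustration):

```python
EVENT_READ, EVENT_WRITE = 1 << 0, 1 << 1   # as in Lib/selectors.py
EPOLLIN, EPOLLOUT = 0x001, 0x004           # typical Linux epoll flags
_NOT_EPOLLIN, _NOT_EPOLLOUT = ~EPOLLIN, ~EPOLLOUT

def events_if_chain(event):
    # Original form: two explicit branches.
    events = 0
    if event & ~EPOLLIN:
        events |= EVENT_WRITE
    if event & ~EPOLLOUT:
        events |= EVENT_READ
    return events

def events_bitmask(event):
    # Patched form: `x and y` yields y when x is nonzero, else 0.
    return ((event & _NOT_EPOLLIN and EVENT_WRITE)
            | (event & _NOT_EPOLLOUT and EVENT_READ))

# Both forms agree for every small event mask.
assert all(events_if_chain(e) == events_bitmask(e) for e in range(64))
```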
@@ -1,30 +0,0 @@
From 86a112f61954dede0a777544e59068c0b01c8439 Mon Sep 17 00:00:00 2001
From: "J. Nick Koston" <nick@koston.org>
Date: Wed, 11 Oct 2023 08:34:01 -1000
Subject: [PATCH] gh-110733: Optimize _run_once for many iterations of the
event loop

---
Lib/asyncio/base_events.py | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/Lib/asyncio/base_events.py b/Lib/asyncio/base_events.py
index b092c93436..956864e424 100644
--- a/Lib/asyncio/base_events.py
+++ b/Lib/asyncio/base_events.py
@@ -1907,8 +1907,11 @@ def _run_once(self):
timeout = 0
elif self._scheduled:
# Compute the desired timeout.
- when = self._scheduled[0]._when
- timeout = min(max(0, when - self.time()), MAXIMUM_SELECT_TIMEOUT)
+ timeout = self._scheduled[0]._when - self.time()
+ if timeout > MAXIMUM_SELECT_TIMEOUT:
+ timeout = MAXIMUM_SELECT_TIMEOUT
+ elif timeout < 0:
+ timeout = 0

event_list = self._selector.select(timeout)
self._process_events(event_list)
--
2.39.3 (Apple Git-145)
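The patch above unrolls `min(max(0, ...), MAXIMUM_SELECT_TIMEOUT)` into explicit branches, saving two builtin calls on every event-loop iteration while computing the same clamped value. A standalone sketch demonstrating the equivalence (the cap value here is illustrative, not taken from the source):

```python
MAXIMUM_SELECT_TIMEOUT = 24 * 3600  # illustrative cap, one day

def timeout_minmax(when, now):
    # Original form: two nested builtin calls per loop iteration.
    return min(max(0, when - now), MAXIMUM_SELECT_TIMEOUT)

def timeout_branches(when, now):
    # Patched form: plain comparisons, no function calls.
    timeout = when - now
    if timeout > MAXIMUM_SELECT_TIMEOUT:
        timeout = MAXIMUM_SELECT_TIMEOUT
    elif timeout < 0:
        timeout = 0
    return timeout

# Past deadline, near deadline, and far-future deadline all agree.
for when, now in [(1.0, 5.0), (5.0, 1.0), (1e9, 0.0)]:
    assert timeout_minmax(when, now) == timeout_branches(when, now)
```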
@@ -1,38 +0,0 @@
From 7e2d93f30b157e414924c32232bb748c8f66c828 Mon Sep 17 00:00:00 2001
From: "J. Nick Koston" <nick@koston.org>
Date: Tue, 12 Dec 2023 14:29:21 -1000
Subject: [PATCH] gh-112989: asyncio: Reduce overhead to connect sockets with
SelectorEventLoop (#112991)

_ensure_fd_no_transport had a KeyError in the success path
---
Lib/asyncio/selector_events.py | 14 +++++---------
.../2023-12-12-05-48-17.gh-issue-112989.ZAa_eq.rst | 1 +
2 files changed, 6 insertions(+), 9 deletions(-)
create mode 100644 Misc/NEWS.d/next/Library/2023-12-12-05-48-17.gh-issue-112989.ZAa_eq.rst

diff --git a/Lib/asyncio/selector_events.py b/Lib/asyncio/selector_events.py
index d521b4e2e2..dcd5e0aa34 100644
--- a/Lib/asyncio/selector_events.py
+++ b/Lib/asyncio/selector_events.py
@@ -261,15 +261,11 @@ def _ensure_fd_no_transport(self, fd):
except (AttributeError, TypeError, ValueError):
# This code matches selectors._fileobj_to_fd function.
raise ValueError(f"Invalid file object: {fd!r}") from None
- try:
- transport = self._transports[fileno]
- except KeyError:
- pass
- else:
- if not transport.is_closing():
- raise RuntimeError(
- f'File descriptor {fd!r} is used by transport '
- f'{transport!r}')
+ transport = self._transports.get(fileno)
+ if transport and not transport.is_closing():
+ raise RuntimeError(
+ f'File descriptor {fd!r} is used by transport '
+ f'{transport!r}')

def _add_reader(self, fd, callback, *args):
self._check_closed()
@@ -0,0 +1,88 @@
From 53da1e8c8ccbe3161ebc42e8b8b7ebd1ab70e05b Mon Sep 17 00:00:00 2001
From: "J. Nick Koston" <nick@koston.org>
Date: Sun, 18 May 2025 11:56:20 -0400
Subject: [PATCH] gh-134173: optimize state transfer between
`concurrent.futures.Future` and `asyncio.Future` (#134174)

Co-authored-by: Kumar Aditya <kumaraditya@python.org>
---
Lib/asyncio/futures.py | 17 +++---
Lib/concurrent/futures/_base.py | 27 +++++++++
Lib/test/test_asyncio/test_futures.py | 58 +++++++++++++++++--
.../test_concurrent_futures/test_future.py | 57 ++++++++++++++++++
...-05-18-07-25-15.gh-issue-134173.53oOoF.rst | 3 +
5 files changed, 148 insertions(+), 14 deletions(-)
create mode 100644 Misc/NEWS.d/next/Library/2025-05-18-07-25-15.gh-issue-134173.53oOoF.rst

diff --git a/Lib/asyncio/futures.py b/Lib/asyncio/futures.py
index d1df6707302..6bd00a64478 100644
--- a/Lib/asyncio/futures.py
+++ b/Lib/asyncio/futures.py
@@ -351,22 +351,19 @@ def _set_concurrent_future_state(concurrent, source):
def _copy_future_state(source, dest):
"""Internal helper to copy state from another Future.

- The other Future may be a concurrent.futures.Future.
+ The other Future must be a concurrent.futures.Future.
"""
- assert source.done()
if dest.cancelled():
return
assert not dest.done()
- if source.cancelled():
+ done, cancelled, result, exception = source._get_snapshot()
+ assert done
+ if cancelled:
dest.cancel()
+ elif exception is not None:
+ dest.set_exception(_convert_future_exc(exception))
else:
- exception = source.exception()
- if exception is not None:
- dest.set_exception(_convert_future_exc(exception))
- else:
- result = source.result()
- dest.set_result(result)
-
+ dest.set_result(result)

def _chain_future(source, destination):
"""Chain two futures so that when one completes, so does the other.
diff --git a/Lib/concurrent/futures/_base.py b/Lib/concurrent/futures/_base.py
index d98b1ebdd58..f506ce68aea 100644
--- a/Lib/concurrent/futures/_base.py
+++ b/Lib/concurrent/futures/_base.py
@@ -558,6 +558,33 @@ def set_exception(self, exception):
self._condition.notify_all()
self._invoke_callbacks()

+ def _get_snapshot(self):
+ """Get a snapshot of the future's current state.
+
+ This method atomically retrieves the state in one lock acquisition,
+ which is significantly faster than multiple method calls.
+
+ Returns:
+ Tuple of (done, cancelled, result, exception)
+ - done: True if the future is done (cancelled or finished)
+ - cancelled: True if the future was cancelled
+ - result: The result if available and not cancelled
+ - exception: The exception if available and not cancelled
+ """
+ # Fast path: check if already finished without lock
+ if self._state == FINISHED:
+ return True, False, self._result, self._exception
+
+ # Need lock for other states since they can change
+ with self._condition:
+ # We have to check the state again after acquiring the lock
+ # because it may have changed in the meantime.
+ if self._state == FINISHED:
+ return True, False, self._result, self._exception
+ if self._state in {CANCELLED, CANCELLED_AND_NOTIFIED}:
+ return True, True, None, None
+ return False, False, None, None
+
 __class_getitem__ = classmethod(types.GenericAlias)

class Executor(object):
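The `_get_snapshot()` patch above replaces four separate lock-taking calls (`done()`, `cancelled()`, `exception()`, `result()`) with one atomic read, using a lock-free fast path for the immutable `FINISHED` state and a double-checked read under the lock otherwise. A stripped-down model of that idea (a toy `MiniFuture`, not the stdlib class):

```python
import threading

FINISHED, CANCELLED, PENDING = "FINISHED", "CANCELLED", "PENDING"

class MiniFuture:
    def __init__(self):
        self._state = PENDING
        self._result = None
        self._exception = None
        self._condition = threading.Condition()

    def set_result(self, result):
        with self._condition:
            self._result = result
            self._state = FINISHED

    def _get_snapshot(self):
        # Fast path: FINISHED never changes again, so no lock is needed.
        if self._state == FINISHED:
            return True, False, self._result, self._exception
        # Other states can still change; re-check under the lock.
        with self._condition:
            if self._state == FINISHED:
                return True, False, self._result, self._exception
            if self._state == CANCELLED:
                return True, True, None, None
            return False, False, None, None

f = MiniFuture()
f.set_result(42)
print(f._get_snapshot())  # (True, False, 42, None)
```

The double check after acquiring the lock matters because another thread may move the state to `FINISHED` between the lock-free read and the acquisition.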
@@ -48,6 +48,7 @@ full_custom_readme: |
 
 ## Versions
 
+* **11.06.25:** - Release `alpine322` tag.
 * **05.12.24:** - Release `alpine321` tag.
 * **07.06.24:** - Release `alpine320` tag.
 * **07.03.24:** - Initial release.