Select Topic Area
Bug
Body
I maintain the Plan 9 implementation of git.
It looks like there's something we're not doing to trigger a keep-alive on the GitHub side when using the HTTPS protocol: smaller repositories clone fine, and larger repositories clone fine on fast connections, but large repositories on slow connections can get disconnected even while the transfer is making progress.
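To be concrete about the only client-side knob I can think of: TCP-level keepalives. The sketch below is illustrative only, not what our code does; it assumes a POSIX socket layer with the Linux-style `TCP_KEEP*` options, while Plan 9's networking goes through /net and dial(2). And if GitHub's side is timing out at the application layer rather than the TCP layer, this wouldn't help anyway.

```c
/*
 * Illustrative only: turn on TCP-level keepalives on an already-open
 * socket.  Assumes POSIX sockets and Linux-style TCP_KEEP* options;
 * Plan 9 would need a different mechanism entirely.
 */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static int
enablekeepalive(int fd)
{
	int on = 1;
	int idle = 30;		/* seconds idle before the first probe */
	int intvl = 10;		/* seconds between probes */
	int cnt = 6;		/* failed probes before giving up */

	if(setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0)
		return -1;
	if(setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle) < 0
	|| setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof intvl) < 0
	|| setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof cnt) < 0)
		return -1;
	return 0;
}
```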
The negotiation always completes fine; the connection terminates during the bulk pack-file transfer. The bigger the repository, the flakier things get.
The connection doesn't freeze; it's making solid progress when I get a FIN packet on the TCP connection. In my current pcap, the FIN arrives about 5 milliseconds after the last data packet I received.
Hence, my best guess is that there's some action (state updates?) I need to take to work around a GitHub bug where the connection isn't actually being recorded as active. A pcap of a failing clone is available at https://orib.dev/clonego.pcap
SSH clones are reliable for me.
On a hunch, I implemented the side-band extension, but my users are still reporting clone failures.
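For context, the side-band handling I added is roughly the following shape. This is a simplified sketch, not the actual code: it assumes the server granted side-band-64k, so every pkt-line in the pack stream starts with a band byte (1 = pack data, 2 = progress, 3 = fatal error).

```c
#include <stdlib.h>
#include <unistd.h>

enum { Maxpkt = 65520 };	/* largest payload side-band-64k allows */

/* read exactly n bytes, looping over short reads */
static int
readn(int fd, char *buf, int n)
{
	int r, tot;

	for(tot = 0; tot < n; tot += r){
		r = read(fd, buf+tot, n-tot);
		if(r <= 0)
			return -1;
	}
	return n;
}

/* read one pkt-line; returns payload length, 0 on flush-pkt, -1 on error */
static int
readpkt(int fd, char *buf)
{
	char hdr[5];
	int n;

	if(readn(fd, hdr, 4) != 4)
		return -1;
	hdr[4] = '\0';
	n = strtol(hdr, NULL, 16);
	if(n == 0)
		return 0;
	n -= 4;
	if(n < 1 || n > Maxpkt)
		return -1;
	return readn(fd, buf, n);
}

/* demultiplex: band 1 goes to the pack file, band 2 to stderr, band 3 is fatal */
static int
demux(int conn, int packfd)
{
	static char buf[Maxpkt];
	int n;

	while((n = readpkt(conn, buf)) > 0){
		switch(buf[0]){
		case 1: write(packfd, buf+1, n-1); break;
		case 2: write(2, buf+1, n-1); break;
		case 3: write(2, buf+1, n-1); return -1;
		}
	}
	return n;
}
```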