Replies: 1 comment
💬 Your Product Feedback Has Been Submitted 🎉
Thank you for taking the time to share your insights with us! Your feedback is invaluable as we build a better GitHub experience for all our users. Here's what you can expect moving forward ⏩
Where to look to see what's shipping 👀
What you can do in the meantime 💻
As a member of the GitHub community, your participation is essential. While we can't promise that every suggestion will be implemented, we want to emphasize that your feedback is instrumental in guiding our decisions and priorities. Thank you once again for your contribution to making GitHub even better! We're grateful for your ongoing support and collaboration in shaping the future of our platform. ⭐
Select Topic Area
Product Feedback
Body
For historical reasons that I cannot fully pin down, GitHub Pages currently returns HTTP 403 when 百度 (Baidu), the most popular search engine in mainland China, crawls a site. The possible reasons I found on the Internet are as follows.
Rejecting indexing requests from Baidu is detrimental to the discoverability of GitHub Pages sites for the vast majority of mainland Chinese users (who don't think to use another search engine). In addition, the first reason (crawling frequency) is a non-issue when a CDN sits in front of a GitHub Pages site with a custom domain: I personally use Cloudflare, and the GitHub Pages server is rarely consulted even under heavy traffic (98.15% of requests over the last 30 days were served from the Cloudflare cache).
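For anyone who wants to verify the behavior themselves, a quick check is to request a Pages site twice, once with a browser-like User-Agent and once with Baiduspider's published User-Agent string, and compare the status codes. This is only a sketch under the assumption that the block keys on the User-Agent header; the URL in the comment is illustrative, not a real affected site.

```python
# Hypothetical reproduction sketch: compare the HTTP status a GitHub Pages
# site returns to a browser-like User-Agent versus Baiduspider's published
# User-Agent string. Assumes the 403 is keyed on the User-Agent header.
import urllib.request
import urllib.error

# User-Agent string Baidu documents for its desktop crawler.
BAIDUSPIDER_UA = ("Mozilla/5.0 (compatible; Baiduspider/2.0; "
                  "+http://www.baidu.com/search/spider.html)")

def fetch_status(url: str, user_agent: str) -> int:
    """Return the HTTP status code for url, or -1 on a network failure."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code        # e.g. 403 if the crawler is rejected
    except urllib.error.URLError:
        return -1            # DNS failure, timeout, no network, etc.

# Example usage (substitute any *.github.io site for the placeholder):
#   fetch_status("https://example.github.io/", "Mozilla/5.0")
#   fetch_status("https://example.github.io/", BAIDUSPIDER_UA)
```

If the two calls return 200 and 403 respectively, the block is User-Agent-based; if both return 403, the filtering likely happens on some other signal (e.g. source IP ranges).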
I urge GitHub to consider allowing Baidu to crawl Pages sites. Possible implementations: