# spider

Web crawler and scraper for Rust, by spider-rs.

Website | Guides | API Docs | Examples | Discord
A high-performance web crawler and scraper for Rust. 200-1000x faster than popular alternatives, with HTTP, headless Chrome, and WebDriver rendering in a single library.
Install the CLI and crawl a site:

```shell
cargo install spider_cli
spider --url https://example.com
```
Or use it as a library. Add `spider` to your `Cargo.toml`:

```toml
[dependencies]
spider = "2"
```

Then crawl a site and collect its links:

```rust
use spider::tokio;
use spider::website::Website;

#[tokio::main]
async fn main() {
    let mut website = Website::new("https://example.com");
    website.crawl().await;
    println!("Pages found: {}", website.get_links().len());
}
```
Process each page the moment it's crawled, not after:

```rust
use spider::tokio;
use spider::website::Website;

#[tokio::main]
async fn main() {
    let mut website = Website::new("https://example.com");
    // Subscribe to page events before starting the crawl.
    let mut rx = website.subscribe(0).unwrap();

    tokio::spawn(async move {
        while let Ok(page) = rx.recv().await {
            println!("- {}", page.get_url());
        }
    });

    website.crawl().await;
    website.unsubscribe();
}
```
Add one feature flag to render JavaScript-heavy pages with headless Chrome:

```toml
[dependencies]
spider = { version = "2", features = ["chrome"] }
```

```rust
use spider::features::chrome_common::RequestInterceptConfiguration;
use spider::tokio;
use spider::website::Website;

#[tokio::main]
async fn main() {
    let mut website = Website::new("https://example.com")
        // Enable network request interception.
        .with_chrome_intercept(RequestInterceptConfiguration::new(true))
        // Enable stealth mode to reduce bot detection.
        .with_stealth(true)
        .build()
        .unwrap();
    website.crawl().await;
}
```
spider also supports WebDriver rendering (Selenium Grid, remote browsers) and AI-driven automation. See the examples for more.