by lucasygu
Xiaohongshu CLI — Search, Analysis, Automation Xiaohongshu content. Built for AI agents.
# Add to your Claude Code skills
git clone https://github.com/lucasygu/redbook

---
name: redbook
description: Search, read, analyze, and automate Xiaohongshu (小红书) content via CLI
allowed-tools: Bash, Read, Write, Glob, Grep
version: 0.4.0
metadata:
  openclaw:
    requires:
      bins:
        - redbook
    install:
      - kind: node
        package: "@lucasygu/redbook"
        bins: [redbook]
        os: [macos]
homepage: https://github.com/lucasygu/redbook
tags:
---
Use the redbook CLI to search notes, read content, analyze creators, automate engagement, and research topics on Xiaohongshu (小红书/RED).
OpenClaw users: Install via clawhub install redbook or npm install -g @lucasygu/redbook.
/redbook search "AI编程" # Search notes
/redbook read <url> # Read a note
/redbook user <userId> # Creator profile
/redbook analyze <userId> # Full creator analysis (profile + posts)
| Intent | Command |
|--------|---------|
| Search notes | redbook search "keyword" --json |
| Read a note | redbook read <url> --json |
| Get comments | redbook comments <url> --json --all |
| Creator profile | redbook user <userId> --json |
| Creator's posts | redbook user-posts <userId> --json |
| Browse feed | redbook feed --json |
| Search hashtags | redbook topics "keyword" --json |
| Analyze viral note | redbook analyze-viral <url> --json |
| Extract content template | redbook viral-template <url1> <url2> --json |
| Post a comment | redbook comment <url> --content "text" |
| Reply to comment | redbook reply <url> --comment-id <id> --content "text" |
| Batch reply (preview) | redbook batch-reply <url> --strategy questions --dry-run |
| Render markdown to cards | redbook render content.md --style xiaohongshu |
| Check connection | redbook whoami |
Always add --json when parsing output programmatically. Without it, output is human-formatted text.
Xiaohongshu CLI tool: search notes, read content, analyze creators, and publish image-text posts. Authenticates via browser cookies; no API key required.
Fastest way to get started

Send this message to your AI assistant (Claude Code, Cursor, Codex, Windsurf, OpenClaw, etc.):

"Install the Xiaohongshu CLI tool @lucasygu/redbook for me via npm, then run redbook whoami to verify that it can connect. GitHub: https://github.com/lucasygu/redbook"

OpenClaw users can also simply run:

clawhub install redbook

The AI will handle installation, verify the connection, and resolve any cookie issues. All you need to do is make sure you are logged into xiaohongshu.com in Chrome.

Once installed, try: "Analyze the competitive landscape of the 'AI编程' topic on Xiaohongshu for me". The AI will automatically search the keyword, analyze engagement data, identify top creators, and suggest content angles.
npm install -g @lucasygu/redbook
# Or via ClawHub (OpenClaw ecosystem)
clawhub install redbook
Requires Node.js >= 22. Supports macOS, Windows, and Linux. Authentication uses Chrome browser cookies, so log into xiaohongshu.com in Chrome first.
After installing, run redbook whoami to verify the connection. The CLI auto-detects all Chrome profiles to find your Xiaohongshu login state. If auto-detection fails, pass cookies manually with --cookie-string.
XHS is not Twitter or Instagram. These platform-specific engagement ratios reveal content type and audience behavior.
**Collect-to-like ratio** (collected_count / liked_count). XHS's "collect" (收藏) is a save-for-later mechanic — users build personal reference libraries. This ratio is the strongest signal of content utility.

| Ratio | Classification | Meaning |
|-------|---------------|---------|
| >40% | 工具型 (Reference) | Tutorial, checklist, template — users bookmark for reuse |
| 20–40% | 认知型 (Insight) | Thought-provoking but not saved for later |
| <20% | 娱乐型 (Entertainment) | Consumed and forgotten — engagement is passive |

**Comment-to-like ratio** (comment_count / liked_count). Measures how much a note triggers conversation.

| Ratio | Classification | Meaning |
|-------|---------------|---------|
| >15% | 讨论型 (Discussion) | Debate, sharing experiences, asking questions |
| 5–15% | 正常互动 (Normal) | Typical engagement pattern |
| <5% | 围观型 (Passive) | Users like but don't engage further |

**Share-to-like ratio** (share_count / liked_count). Measures social currency — whether users share to signal identity or help others.

| Ratio | Meaning |
|-------|---------|
| >10% | 社交货币 — people share to signal taste, identity, or help friends |
| <10% | Content consumed individually, not forwarded |
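These benchmarks are mechanical to apply. A Python sketch (classify_note is a hypothetical helper, not part of the CLI; counts are assumed already parsed to plain integers):

```python
def classify_note(liked: int, collected: int, comments: int, shares: int) -> dict:
    """Classify a note against the XHS ratio benchmarks above."""
    if liked == 0:
        return {"content": "unknown", "discussion": "unknown", "social": "unknown"}
    collect_ratio = collected / liked
    comment_ratio = comments / liked
    share_ratio = shares / liked
    # Collect/like: is this reference, insight, or entertainment content?
    if collect_ratio > 0.40:
        content = "工具型 (Reference)"
    elif collect_ratio >= 0.20:
        content = "认知型 (Insight)"
    else:
        content = "娱乐型 (Entertainment)"
    # Comment/like: how much conversation does it trigger?
    if comment_ratio > 0.15:
        discussion = "讨论型 (Discussion)"
    elif comment_ratio >= 0.05:
        discussion = "正常互动 (Normal)"
    else:
        discussion = "围观型 (Passive)"
    # Share/like: is it social currency?
    social = "社交货币 (Social currency)" if share_ratio > 0.10 else "Individual consumption"
    return {"content": content, "discussion": discussion, "social": social}

# Example: 10,000 likes, 4,500 collects, 800 comments, 1,200 shares
print(classify_note(10_000, 4_500, 800, 1_200))
```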
| Sort | What It Reveals |
|------|----------------|
| --sort popular | Proven ceiling — the best a keyword can do |
| --sort latest | Content velocity — how much is being posted now |
| --sort general | Algorithm-weighted blend (default) |
| Form | Tendency |
|------|----------|
| 图文 (image-text, type: "normal") | Higher collect rate — users save reference content |
| 视频 (video, type: "video") | Higher like rate — easier to consume passively |
Each module is a composable building block. Combine them for different analysis depths.
Answers: Which keywords have the highest engagement ceiling? Which are saturated vs. underserved?
Commands:
redbook search "keyword1" --sort popular --json
redbook search "keyword2" --sort popular --json
# Repeat for each keyword in your list
Fields to extract from each result's items[]:
- items[].note_card.interact_info.liked_count — likes (may use Chinese numbers: "1.5万" = 15,000)
- items[].note_card.interact_info.collected_count — collects
- items[].note_card.interact_info.comment_count — comments
- items[].note_card.user.nickname — author

How to interpret:

- Top1 ceiling: items[0] likes — the best-performing note for this keyword. This is the proven demand signal.
- Top10 average: mean likes across items[0..9] — how well an average top note does.

Output: Keyword × engagement table ranked by Top1 ceiling.

| Keyword | Top1 Likes | Top10 Avg | Top1 Collects | Collect/Like |
|---------|-----------|-----------|---------------|-------------|
| keyword1 | 12,000 | 3,200 | 5,400 | 45% |
| keyword2 | 8,500 | 4,100 | 1,200 | 14% |
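Each row of that table can be computed directly from one search result. A sketch, assuming the items[] shape listed above (keyword_row and _count are hypothetical helpers; _count is a minimal converter for 万-suffixed strings, covered fully in the Chinese-numbers section):

```python
import json

def _count(value: str) -> int:
    # Minimal converter for abbreviated counts ("1.2万" = 12,000).
    s = str(value)
    return int(float(s[:-1]) * 10_000) if s.endswith("万") else int(float(s))

def keyword_row(search_json: str) -> dict:
    """Summarize one `redbook search ... --json` result into a table row."""
    items = json.loads(search_json)["items"]
    likes = [_count(i["note_card"]["interact_info"]["liked_count"]) for i in items]
    collects = [_count(i["note_card"]["interact_info"]["collected_count"]) for i in items]
    top10 = likes[:10]
    return {
        "top1_likes": likes[0],                 # proven demand ceiling
        "top10_avg": sum(top10) // len(top10),  # average top-note performance
        "top1_collects": collects[0],
        "collect_like_pct": round(100 * collects[0] / likes[0]),
    }
```

Run it once per keyword, then sort the rows by top1_likes to rank the table.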
Answers: Which topic × scene intersections have demand? Where are the content gaps?
Commands:
# Combine base topic with scene/angle keywords
redbook search "base topic + scene1" --sort popular --json
redbook search "base topic + scene2" --sort popular --json
redbook search "base topic + scene3" --sort popular --json
Fields to extract: Same as Module A — Top1 liked_count for each combination.
How to interpret:
Output: Base × Scene heatmap.
scene1 scene2 scene3 scene4
base topic ████ 8K ██ 2K ████ 12K ░░ 200
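One way to render those bars (a sketch; heat_bar is a hypothetical helper that scales each cell against the row maximum):

```python
def heat_bar(likes: int, max_likes: int, width: int = 4) -> str:
    """Render one ASCII heatmap cell: filled blocks scaled to the row max."""
    filled = round(width * likes / max_likes) if max_likes else 0
    label = f"{likes // 1000}K" if likes >= 1000 else str(likes)
    return "█" * filled + "░" * (width - filled) + f" {label}"

# Example row (placeholder numbers)
row = {"scene1": 8000, "scene2": 2000, "scene3": 12000, "scene4": 200}
print("base topic  " + "  ".join(heat_bar(v, max(row.values())) for v in row.values()))
```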
Answers: What type of content is each keyword? Reference, insight, or entertainment?
Commands: Use search results from Module A, or for a single note:
redbook analyze-viral "<noteUrl>" --json
Fields to extract:
- From search results: raw interact_info fields
- From analyze-viral: use the pre-computed engagement.collectToLikeRatio, engagement.commentToLikeRatio, engagement.shareToLikeRatio

How to interpret: Apply the ratio benchmarks from XHS Platform Signals above.

Output: Per-keyword or per-note classification.

| Keyword | Collect/Like | Comment/Like | Type |
|---------|-------------|-------------|------|
| keyword1 | 45% | 8% | 工具型 + 正常互动 |
| keyword2 | 12% | 22% | 娱乐型 + 讨论型 |
Answers: Who are the key creators in this niche? What are their strategies?
Commands:
# 1. Collect unique user_ids from search results across keywords
# Extract from items[].note_card.user.user_id
# 2. For each creator:
redbook user "<userId>" --json
redbook user-posts "<userId>" --json
Fields to extract:
- user: interactions[] where type === "fans" → follower count
- user-posts: notes[].interact_info.liked_count for all posts → compute avg, median, max
- user-posts: notes[].display_title → content patterns, posting frequency

How to interpret:

Output: Creator comparison table.

| Creator | Followers | Avg Likes | Median | Max | Posts | Style |
|---------|----------|-----------|--------|-----|-------|-------|
| @creator1 | 12万 | 3,200 | 1,800 | 45,000 | 89 | Tutorial |
| @creator2 | 5.4万 | 8,100 | 6,500 | 22,000 | 34 | Story |
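The avg/median/max columns come straight from the per-post like counts. A sketch (creator_baseline is a hypothetical helper; counts are assumed already parsed to integers):

```python
from statistics import mean, median

def creator_baseline(post_likes: list[int]) -> dict:
    """Summarize a creator's per-post likes from `user-posts` output."""
    return {
        "avg": round(mean(post_likes)),
        "median": median(post_likes),
        "max": max(post_likes),
        "posts": len(post_likes),
    }
```

A max far above the median flags a creator whose reach depends on occasional outliers rather than a consistent baseline.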
Answers: Do image-text or video notes perform better for this topic?
Commands:
redbook search "keyword" --type image --sort popular --json
redbook search "keyword" --type video --sort popular --json
Fields to extract:
- Compare liked_count and collected_count between the two result sets
- type field: "normal" = image-text, "video" = video

Output: Form × engagement table.

| Form | Top1 Likes | Top10 Avg | Collect/Like |
|------|-----------|-----------|-------------|
| 图文 | 8,000 | 2,400 | 42% |
| 视频 | 15,000 | 5,100 | 18% |
Answers: Which keywords should I target? Where is the best effort-to-reward ratio?
Input: Keyword matrix from Module A.
Scoring logic:
Tier thresholds (based on Top1 likes):
| Tier | Top1 Likes | Meaning |
|------|-----------|---------|
| S | >100,000 (10万+) | Massive demand — hard to compete but huge upside |
| A | 20,000–100,000 | Strong demand — competitive but winnable |
| B | 5,000–20,000 | Moderate demand — good for growing accounts |
| C | <5,000 | Niche — low competition, low ceiling |
Output: Tiered keyword list.
| Tier | Keyword | Top1 | Competition | Opportunity |
|------|---------|------|-------------|------------|
| A | keyword1 | 45K | Medium (6/10 >1K) | High |
| B | keyword3 | 12K | Low (2/10 >1K) | Very High |
| S | keyword2 | 120K | High (10/10 >1K) | Medium |
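The tiering itself is a one-function sketch (keyword_tier is hypothetical; the threshold table leaves boundary values ambiguous, so this reading assigns 20,000 to A and 5,000 to B):

```python
def keyword_tier(top1_likes: int) -> str:
    """Map a keyword's Top1 likes to the S/A/B/C tiers above."""
    if top1_likes > 100_000:
        return "S"   # massive demand, hard to compete
    if top1_likes >= 20_000:
        return "A"   # strong demand, winnable
    if top1_likes >= 5_000:
        return "B"   # moderate demand, good for growing accounts
    return "C"       # niche: low competition, low ceiling
```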
Answers: Who is the audience for this niche? What do they want?
Input: Engagement ratios from Module C + comment themes from analyze-viral + content patterns.
Fields to extract from analyze-viral JSON:
- comments.themes[] — recurring phrases and keywords from the comment section
- comments.questionRate — % of comments that are questions (learning intent)
- engagement.collectToLikeRatio — save behavior signals intent
- hook.hookPatterns[] — what title patterns attract this audience

Inference rules:
Output: Audience persona summary — demographics, intent, content preferences.
Answers: What specific content should I create, backed by data?
Input: Opportunity scores (Module F) + audience persona (Module G) + heatmap gaps (Module B).
For each content idea, specify:
- hookPatterns that work for this niche

Output: Ranked content ideas with data backing.

| # | Keyword | Hook Angle | Type | Target Likes | Reference |
|---|---------|-----------|------|-------------|-----------|
| 1 | keyword3 | "N个方法..." (List) | 工具型 图文 | 5K+ | [top note URL] |
| 2 | keyword1 | "为什么..." (Question) | 认知型 视频 | 10K+ | [top note URL] |
Answers: Which comments deserve a reply? What is the comment quality distribution?
Commands:
# 1. Fetch all comments
redbook comments "<noteUrl>" --all --json
# 2. Preview reply candidates (dry run)
redbook batch-reply "<noteUrl>" --strategy questions --dry-run --json
# 3. Execute replies with template (5 min delay with ±30% jitter)
redbook batch-reply "<noteUrl>" --strategy questions \
--template "感谢提问!关于{content},..." \
--max 10
Fields to extract from --dry-run JSON:
- candidates[].commentId — target comments
- candidates[].isQuestion — boolean, detected question
- candidates[].likes — engagement signal
- candidates[].hasSubReplies — whether already answered
- skipped — how many comments were filtered out
- totalComments — total fetched

Strategies:

- questions — replies to comments ending with ? or ? (learning-oriented audience)
- top-engaged — replies to highest-liked comments (maximum visibility)
- all-unanswered — replies to comments with no existing sub-replies (fill gaps)

How to interpret:

Safety: Hard cap of 30 replies per batch, minimum 3-minute delay with ±30% jitter (default 5 min), --dry-run by default (no template = preview only), immediate stop on captcha. See Rate Limits & Safety for details.
Output: Reply plan table with candidate comments, strategy match reason, and status.
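The delay behavior described above (base interval with ±30% uniform jitter) can be sketched as follows; jittered_delays is a hypothetical helper that only mirrors the documented defaults:

```python
import random

def jittered_delays(n: int, base_ms: int = 300_000, jitter: float = 0.30) -> list[int]:
    """Build a per-reply delay schedule: the 5-minute default interval
    with ±30% uniform jitter, so timing never forms a uniform pattern."""
    return [int(base_ms * random.uniform(1 - jitter, 1 + jitter)) for _ in range(n)]
```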
Answers: What structural template can I extract from successful notes to guide new content creation?
Commands:
# 1. Find top notes for a keyword
redbook search "keyword" --sort popular --json
# 2. Extract structural template from 2-3 top performers
redbook viral-template "<url1>" "<url2>" "<url3>" --json
Fields to extract from viral-template JSON:
- dominantHookPatterns[] — hook types appearing in the majority of notes
- titleStructure.commonPatterns[] — specific title formulas
- titleStructure.avgLength — target title length
- bodyStructure.lengthRange — target word count [min, max]
- bodyStructure.paragraphRange — target paragraph count
- engagementProfile.type — reference/insight/entertainment
- audienceSignals.commonThemes[] — what the audience talks about

How to interpret:
Composition with other modules:
Output: Content template spec — the structural skeleton for content creation. An LLM (via the composed workflow) uses this template to generate actual title, body, hashtags, and cover image prompt.
Answers: How should I manage ongoing engagement with my audience?
This module is a workflow that composes Modules I and J with human oversight.
Workflow:
1. redbook comments "<myNoteUrl>" --all --json to fetch recent comments
2. redbook batch-reply --strategy questions --dry-run to identify reply candidates
3. After review, execute: redbook batch-reply --strategy questions --template "..." --max 10

Safety rules:

- Always --dry-run first, with human approval before execution
- Skip comments that already have replies (hasSubReplies)

Anti-spam guidelines:
Answers: How do I turn markdown content into Xiaohongshu-ready image cards?
Commands:
# Render markdown to styled PNG cards
redbook render content.md --style xiaohongshu
# Custom style and output directory
redbook render content.md --style dark --output-dir ./cards
# JSON output (for programmatic use)
redbook render content.md --json
Input: Markdown file with YAML frontmatter:
---
emoji: "🚀"
title: "5个AI效率技巧"
subtitle: "Claude Code 实战"
---
## 技巧一:...
Content here...
---
## 技巧二:...
More content...
Output: cover.png + card_1.png, card_2.png, ... in the same directory.
Card specs:
Pagination modes:
- auto (default) — smart split on heading/paragraph boundaries using a character-count heuristic
- separator — manual split on --- in markdown

How to interpret:

- Uses your existing Chrome installation (via puppeteer-core) — no browser download needed
- Publish the rendered cards with redbook post --images cover.png card_1.png ...

Dependencies: Requires puppeteer-core and marked (optional; install with npm install -g puppeteer-core marked).
Composition with other modules:
- Use redbook post --images for publishing

Combine modules for different analysis depths.
Modules: A → C → F
Search 3–5 keywords, classify engagement type, rank opportunities. Good for quickly validating whether a niche is worth deeper research.
Modules: A → B → E → F → H
Build keyword matrix, map topic × scene intersections, check content form performance, score opportunities, brainstorm specific content ideas.
Modules: A → D
Find who dominates a niche and study their content strategy, posting frequency, and engagement patterns.
Modules: A → B → C → D → E → F → G → H
The comprehensive playbook — keyword landscape, cross-topic heatmap, engagement signals, creator profiles, content form analysis, opportunity scoring, audience personas, and data-backed content ideas.
Command: redbook analyze-viral "<url>" --json
No module composition needed — analyze-viral returns hook analysis, engagement ratios, comment themes, author baseline comparison, and a 0-100 viral score in one call.
# 1. Find top notes
redbook search "keyword" --sort popular --json
# 2. Extract template from top 3 notes (replaces manual synthesis)
redbook viral-template "<url1>" "<url2>" "<url3>" --json
viral-template automates what previously required manual synthesis across analyze-viral results. It outputs a ContentTemplate JSON that captures dominant hooks, body structure ranges, engagement profile, and audience signals.
Modules: I
Single-module workflow for managing comment engagement on your notes. Use batch-reply --dry-run to audit, then execute with a template.
Modules: A → J → H → L
Keyword research → viral template extraction → data-backed content brainstorm → render to image cards. The template provides structural constraints that guide Module H's content ideas. Module L renders the final markdown to XHS-ready PNGs.
Modules: A → J → H → L → post
The full pipeline: research keywords → extract viral template → brainstorm content → write markdown → render to styled image cards → publish via redbook post --images cover.png card_1.png ...
Modules: A → C → I → J → K
Comprehensive automation playbook — keyword analysis, engagement classification, comment operations, viral replication templates, and engagement automation workflow.
XHS enforces aggressive anti-spam (风控) that detects automated behavior through device fingerprinting, activity ratio monitoring, and timing pattern analysis. The CLI applies safe defaults based on platform research.
| Action | Safe Interval | CLI Default | Hard Cap |
|--------|--------------|-------------|----------|
| Post a note | 3-4 hours (2-3 notes/day max) | N/A (manual) | — |
| Comment | ≥3 minutes | N/A (manual) | — |
| Reply | ≥3 minutes | N/A (manual) | — |
| Batch reply delay | ≥3 minutes | 5 min ±30% jitter | — |
| Batch reply count | — | 10 | 30 |
- post, comment, and reply commands display safe-interval reminders after each action
- Always run --dry-run first, review candidates, then execute
- Never set --delay below 180000 (3 min)
- Space post commands 3-4 hours apart (2-3 notes/day maximum)

The following operations work reliably via API:
The following operations are unreliable via API (frequently trigger captcha):
- post (use --private for a higher success rate)

The following operations require browser automation (not supported by this CLI):
### redbook search <keyword>

Search for notes by keyword. Returns note titles, URLs, likes, author info.
redbook search "Claude Code教程" --json
redbook search "AI编程" --sort popular --json # Sort: general, popular, latest
redbook search "Cursor" --type image --json # Type: all, video, image
redbook search "MCP Server" --page 2 --json # Pagination
Options:
- --sort <type>: general (default), popular, latest
- --type <type>: all (default), video, image
- --page <n>: Page number (default: 1)

### redbook read <url>

Read a note's full content — title, body text, images, likes, comments count.
redbook read "https://www.xiaohongshu.com/explore/abc123" --json
Accepts full URLs or short note IDs. Falls back to HTML scraping if API returns captcha.
### redbook comments <url>

Get comments on a note. Use --all to fetch all pages.
redbook comments "https://www.xiaohongshu.com/explore/abc123" --json
redbook comments "https://www.xiaohongshu.com/explore/abc123" --all --json
### redbook user <userId>

Get a creator's profile — nickname, bio, follower count, note count, likes received.
redbook user "5a1234567890abcdef012345" --json
The userId is the hex string from the creator's profile URL.
### redbook user-posts <userId>

List all notes posted by a creator. Returns titles, URLs, likes, timestamps.
redbook user-posts "5a1234567890abcdef012345" --json
### redbook feed

Browse the recommendation feed.
redbook feed --json
### redbook topics <keyword>

Search for topic hashtags. Useful for finding trending topics to attach to posts.
redbook topics "Claude Code" --json
### redbook analyze-viral <url>

Analyze why a viral note works. Returns a deterministic viral score (0–100).
redbook analyze-viral "https://www.xiaohongshu.com/explore/abc123" --json
redbook analyze-viral "https://www.xiaohongshu.com/explore/abc123" --comment-pages 5
Options:
- --comment-pages <n>: Comment pages to fetch (default: 3, max: 10)

JSON output structure:

Returns { note, score, hook, content, visual, engagement, comments, relative, fetchedAt }.

- score.overall (0–100) — composite of hook (20) + engagement (20) + relative (20) + content (20) + comments (20)
- hook.hookPatterns[] — detected title patterns (Identity Hook, Emotion Word, Number Hook, Question, etc.)
- engagement — likes, comments, collects, shares + ratios (collectToLikeRatio, commentToLikeRatio, shareToLikeRatio)
- relative.viralMultiplier — this note's likes / author's median likes
- relative.isOutlier — true if viralMultiplier > 3
- comments.themes[] — top recurring keyword phrases from comments

### redbook viral-template <url> [url2] [url3]

Extract a reusable content template from 1–3 viral notes. Analyzes each note (same pipeline as analyze-viral) and synthesizes common structural patterns.
redbook viral-template "<url1>" "<url2>" "<url3>" --json
redbook viral-template "<url1>" --comment-pages 5 --json
Options:
- --comment-pages <n>: Comment pages to fetch per note (default: 3, max: 10)

JSON output structure:

Returns { dominantHookPatterns, titleStructure, bodyStructure, engagementProfile, audienceSignals, sourceNotes, generatedAt }.

- dominantHookPatterns[] — hook types appearing in the majority of input notes
- titleStructure.avgLength — average title length across notes
- bodyStructure.lengthRange — [min, max] body length
- engagementProfile.type — "reference" / "insight" / "entertainment"
- audienceSignals.commonThemes[] — merged comment themes across notes

### redbook comment <url>

Post a top-level comment on a note.
redbook comment "<noteUrl>" --content "Great post!" --json
Options:
- --content <text> (required): Comment text

### redbook reply <url>

Reply to a specific comment on a note.
redbook reply "<noteUrl>" --comment-id "<commentId>" --content "Thanks for asking!" --json
Options:
- --comment-id <id> (required): Comment ID to reply to (from comments --json output)
- --content <text> (required): Reply text

### redbook batch-reply <url>

Reply to multiple comments using a filtering strategy. Always preview with --dry-run first.
# Preview which comments match the strategy
redbook batch-reply "<noteUrl>" --strategy questions --dry-run --json
# Execute replies with a template (default 5 min delay with jitter)
redbook batch-reply "<noteUrl>" --strategy questions \
--template "感谢提问!{content}" --max 10
Options:
- --strategy <name>: questions (default), top-engaged, all-unanswered
- --template <text>: Reply template with {author}, {content} placeholders
- --max <n>: Max replies (default: 10, hard cap: 30)
- --delay <ms>: Delay between replies in ms (default: 300000 / 5 min, min: 180000 / 3 min). ±30% random jitter applied automatically.
- --dry-run: Preview candidates without posting (default when no template)

Safety: Stops immediately on captcha. No template = dry-run only. Delays include random jitter to avoid the uniform timing patterns that trigger XHS bot detection.

### redbook render <file>

Render a markdown file with YAML frontmatter into styled PNG image cards. Uses the user's existing Chrome installation — no browser download needed.
redbook render content.md --style xiaohongshu
redbook render content.md --style dark --output-dir ./cards
redbook render content.md --pagination separator --json
Options:
- --style <name>: purple, xiaohongshu (default), mint, sunset, ocean, elegant, dark
- --pagination <mode>: auto (default), separator (split on ---)
- --output-dir <dir>: Output directory (default: same as input file)
- --width <n>: Card width in px (default: 1080)
- --height <n>: Card height in px (default: 1440)
- --dpr <n>: Device pixel ratio (default: 2)

Requires: puppeteer-core and marked (npm install -g puppeteer-core marked). Does NOT require XHS cookies — purely offline rendering.
Override Chrome path: Set CHROME_PATH environment variable if Chrome is not in the standard location.
### redbook whoami

Check connection status. Verifies cookies are valid and shows the logged-in user.
redbook whoami
### redbook post (Limited)

Publish an image note. Frequently triggers captcha (type=124) on the creator API. Image upload works, but the publish step is unreliable. For posting, consider browser automation instead.
redbook post --title "标题" --body "正文" --images cover.png --json
redbook post --title "测试" --body "..." --images img.png --private --json
Options:
- --title <title>: Note title (required)
- --body <body>: Note body text (required)
- --images <paths...>: Image file paths (required, at least one)
- --topic <keyword>: Search and attach a topic hashtag
- --private: Publish as a private note

All commands accept:

- --cookie-source <browser>: chrome (default), safari, firefox
- --chrome-profile <name>: Chrome profile directory name (e.g., "Profile 1"). Auto-discovered if omitted.
- --json: Output as JSON

The XHS API requires a valid xsec_token to fetch note content. Without it, read, comments, and analyze-viral return {}.
Key rules:
- A URL carrying ?xsec_token=... from a previous session will return {}. Never cache or reuse old URLs.
- search always returns fresh tokens. Every item in search results includes a valid xsec_token for that note.
- A bare note ID returns {}. Running redbook read <noteId> without a token almost always fails.

The correct workflow — always search first:
# WRONG — stale URL or bare noteId, will likely return {}
redbook read "689da7b0000000001b0372c6" --json
redbook read "https://www.xiaohongshu.com/explore/689da7b0?xsec_token=OLD_TOKEN" --json
# RIGHT — search first, then use the fresh URL with token
redbook search "AI编程" --sort popular --json
# Extract the noteId + xsec_token from search results, then:
redbook read "https://www.xiaohongshu.com/explore/<noteId>?xsec_token=<freshToken>" --json
For agents: When the user gives a bare XHS note URL (no xsec_token param), extract the noteId from the URL path, search for the note title or noteId to get a fresh token, then use the full URL with the fresh token.
How to extract fresh URLs from search results (JSON):
# Each search result item has: { id: "noteId", xsec_token: "...", note_card: { ... } }
# Build the URL: https://www.xiaohongshu.com/explore/{id}?xsec_token={xsec_token}
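As a concrete sketch (fresh_note_url is a hypothetical helper over the item shape shown in the comment above):

```python
def fresh_note_url(item: dict) -> str:
    """Build a readable note URL from one search-result item."""
    return (
        "https://www.xiaohongshu.com/explore/"
        f"{item['id']}?xsec_token={item['xsec_token']}"
    )

# Example with placeholder values:
# fresh_note_url({"id": "689da7b0", "xsec_token": "TOKEN"})
# → "https://www.xiaohongshu.com/explore/689da7b0?xsec_token=TOKEN"
```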
Commands that need xsec_token: read, comments, analyze-viral
Commands that do NOT need xsec_token: search, user, user-posts, feed, whoami, topics
The XHS API returns abbreviated numbers with Chinese unit suffixes:
| API value | Actual number |
|-----------|---------------|
| "1.5万" | 15,000 |
| "2.4万" | 24,000 |
| "1.2亿" | 120,000,000 |
| "115" | 115 |
万 = ×10,000. 亿 = ×100,000,000. Numbers under 10,000 are plain integers as strings.
The analyze-viral command handles this automatically. When parsing --json output manually, watch for these suffixes in interact_info fields (liked_count, collected_count, etc.).
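When parsing manually, the conversion is a few lines. A Python sketch (parse_xhs_count is a hypothetical helper, not part of the CLI):

```python
def parse_xhs_count(value: str) -> int:
    """Convert an abbreviated XHS count string to a plain integer.

    万 = ×10,000 and 亿 = ×100,000,000; anything else is a plain
    integer rendered as a string.
    """
    units = {"万": 10_000, "亿": 100_000_000}
    value = str(value).strip()
    if value and value[-1] in units:
        return int(float(value[:-1]) * units[value[-1]])
    return int(value)
```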
| Error | Meaning | Fix |
|-------|---------|-----|
| {} empty response | Missing or expired xsec_token | Search first to get a fresh token |
| "No 'a1' cookie" | Not logged into XHS in browser | Log into xiaohongshu.com in Chrome |
| "Session expired" | Cookie too old | Re-login in Chrome |
| "NeedVerify" / captcha | Anti-bot triggered | Wait and retry, or reduce request frequency |
| "IP blocked" (300012) | Rate limited | Wait or switch network |
When producing analysis reports, use these formats:
Data tables: Markdown tables with exact field mappings. Always include the metric unit.
Heatmaps: ASCII bar charts for cross-topic comparison:
职场 生活 教育 创业
AI编程 ████ 8K ██ 2K ████ 12K ░░ 200
Claude Code ██ 3K ░░ 100 ██ 4K █ 1K
Creator comparison: Structured table with both quantitative metrics and qualitative style assessment.
Final reports: Use this section order:
import { XhsClient } from "@lucasygu/redbook";
import { loadCookies } from "@lucasygu/redbook/cookies";
const cookies = await loadCookies("chrome");
const client = new XhsClient(cookies);
const results = await client.searchNotes("AI编程", 1, 20, "popular");
const topics = await client.searchTopics("Claude Code");
- Cookies are read from the browser (--cookie-source)
- Rendering requires puppeteer-core and marked (npm install -g puppeteer-core marked). Uses your existing Chrome — no additional browser download.

When used through an AI assistant, these workflows are chained automatically. When using the CLI directly, each command can also run on its own.
# Check connection
redbook whoami
# Search notes
redbook search "AI编程" --sort popular
# Read a note
redbook read https://www.xiaohongshu.com/explore/abc123
# Get comments
redbook comments https://www.xiaohongshu.com/explore/abc123 --all
# Browse the recommendation feed
redbook feed
# View creator info
redbook user <userId>
redbook user-posts <userId>
# Search topic hashtags
redbook topics "Claude Code"
# Analyze a viral note
redbook analyze-viral https://www.xiaohongshu.com/explore/abc123
# Extract a content template from multiple viral notes
redbook viral-template "<url1>" "<url2>" "<url3>" --json
# Post a comment
redbook comment "<noteUrl>" --content "写得好!"
# Reply to a comment
redbook reply "<noteUrl>" --comment-id "<id>" --content "感谢提问!"
# Batch reply by strategy (preview first, then execute)
redbook batch-reply "<noteUrl>" --strategy questions --dry-run
redbook batch-reply "<noteUrl>" --strategy questions --template "感谢!{content}" --max 10
# Render Markdown to image cards (requires optional dependencies)
redbook render content.md --style xiaohongshu
redbook render content.md --style dark --output-dir ./cards
# Publish an image note
redbook post --title "标题" --body "正文内容" --images cover.png
redbook post --title "测试" --body "..." --images img.png --private
| Command | Description |
|------|------|
| whoami | Show the currently logged-in account |
| search <keyword> | Search notes |
| read <url> | Read a single note |
| comments <url> | Get a note's comments |
| user <userId> | View a user's profile |
| user-posts <userId> | List all of a user's notes |
| feed | Get recommendation-feed content |
| post | Publish an image note (prone to captcha; see the notes below) |
| topics <keyword> | Search topics/hashtags |
| analyze-viral <url> | Analyze a viral note (hook, engagement, structure) |
| viral-template <url...> | Extract a content template from 1-3 viral notes |
| comment <url> | Post a comment |
| reply <url> | Reply to a specific comment |
| batch-reply <url> | Batch reply to comments by strategy (supports preview mode) |
| render <file> | Render Markdown to Xiaohongshu image-card PNGs (requires optional dependencies) |
| Option | Description | Default |
|------|------|...