SpiderGo Docs

Architecture

System Overview

SpiderGo is split into three runtime surfaces: a Go API backend, a React dashboard frontend, and this Next.js docs site.

| Surface | Responsibility | Key Tech |
| --- | --- | --- |
| Backend | Auth, crawl/scrape execution, persistence, API keys | Gin, GORM, Redis, Colly |
| Frontend | User flows, dashboards, job history, API key management | React 19, Redux Toolkit, Axios |
| Docs | Developer and product documentation | Next.js, @farming-labs/docs |

Request Lifecycle

  1. User authenticates via email/password or OAuth.
  2. Backend issues access and refresh tokens as HttpOnly cookies.
  3. Frontend sends credentialed requests to protected routes.
  4. Usecase layer orchestrates crawl/scrape execution.
  5. Results are persisted in PostgreSQL and selectively cached in Redis.
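Step 2 of the lifecycle can be sketched with the Go standard library. The cookie names, paths, and lifetimes below are illustrative assumptions, not SpiderGo's actual values:

```go
package main

import (
	"net/http"
	"time"
)

// sessionCookies builds the access/refresh token cookies issued in step 2.
// Names, paths, and lifetimes here are assumptions for illustration.
func sessionCookies(accessToken, refreshToken string) []*http.Cookie {
	now := time.Now()
	return []*http.Cookie{
		{
			Name:     "access_token", // assumed name
			Value:    accessToken,
			Path:     "/",
			HttpOnly: true, // not readable from frontend JavaScript
			Secure:   true,
			SameSite: http.SameSiteLaxMode,
			Expires:  now.Add(15 * time.Minute),
		},
		{
			Name:     "refresh_token", // assumed name
			Value:    refreshToken,
			Path:     "/auth/refresh", // only sent back on refresh requests
			HttpOnly: true,
			Secure:   true,
			SameSite: http.SameSiteLaxMode,
			Expires:  now.Add(7 * 24 * time.Hour),
		},
	}
}

// issueSessionCookies writes both cookies onto an HTTP response.
func issueSessionCookies(w http.ResponseWriter, access, refresh string) {
	for _, c := range sessionCookies(access, refresh) {
		http.SetCookie(w, c)
	}
}
```

Because both cookies are `HttpOnly`, the frontend never handles raw tokens; it simply sends credentialed requests (step 3) and the browser attaches the cookies.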

Backend Layer Model

| Layer | Purpose | Example Responsibilities |
| --- | --- | --- |
| Delivery | HTTP boundary | Routing, request binding, response formatting |
| Usecase | Business workflow | Validate inputs, call repositories/services |
| Repository | Data access | CRUD for users, keys, results, history |
| Infrastructure | External systems | JWT, Redis, OAuth providers, crawler/scraper engine |
| Domain | Core contracts | Entities, interfaces, constants |

The backend composition root is `Delivery/main.go`, where config, DB/Redis clients, repositories, services, usecases, and route groups are wired.
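The wiring order in the composition root can be sketched as follows; every type and field name here is a stand-in for illustration, not SpiderGo's actual code:

```go
package main

// Stand-in types for each layer; the real code uses GORM/Redis clients
// and concrete repository/usecase implementations.
type config struct{ dbDSN, redisAddr string }
type dbClient struct{ dsn string }
type redisClient struct{ addr string }
type userRepo struct{ db *dbClient }
type authUsecase struct {
	repo  *userRepo
	cache *redisClient
}

// wire mirrors the composition-root order: config -> clients -> repositories
// -> usecases; route groups would then receive the usecase.
func wire(cfg config) *authUsecase {
	db := &dbClient{dsn: cfg.dbDSN}            // GORM client in the real code
	cache := &redisClient{addr: cfg.redisAddr} // Redis client
	repo := &userRepo{db: db}                  // repository layer
	return &authUsecase{repo: repo, cache: cache}
}
```

Keeping all wiring in one place means each layer depends only on the layer below it, which is what makes the layer table above enforceable.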

Route Topology

| Route Group | Auth Mode | Purpose |
| --- | --- | --- |
| `/auth` | Public + Cookie | Register/login/oauth/refresh/reset/verify |
| `/auth/me` | Cookie | Current user profile |
| `/auth/api-keys` | Cookie | API key lifecycle management |
| `/crawl`, `/scrape`, `/history` | Cookie | Dashboard job execution and history |
| `/trial/*` | Public (rate-limited) | Demo crawl/scrape endpoints |
| `/v1/*` | API Key | Programmatic integration endpoints |

Security Architecture

| Mechanism | Used By | Enforcement |
| --- | --- | --- |
| Cookie session auth | Browser dashboard routes | AuthMiddleware validates the access token cookie and injects user context |
| API key auth | `/v1/*` routes | APIKeyMiddleware validates Bearer key state and quota |

Crawl and Scrape Pipeline

| Service | Execution Style | Output |
| --- | --- | --- |
| Scraper | Single-page fetch | Title, description, content, links, product metadata |
| Crawler | Breadth-first traversal with limits | Multi-page aggregate result (`CrawlerResult`) |

Crawler behavior includes depth/page caps, denied URL filtering, persistence of final output, and Redis caching for repeat seeds.
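The bounded breadth-first traversal can be sketched as below. The real engine is Colly-based; the `fetch` callback, the deny list, and the cap parameters are placeholders for illustration:

```go
package main

// crawl visits pages breadth-first from a seed, honoring a depth cap, a page
// cap, and a denied-URL filter, and returns the visited URLs in order.
func crawl(seed string, fetch func(string) []string, denied map[string]bool, maxDepth, maxPages int) []string {
	type node struct {
		url   string
		depth int
	}
	visited := map[string]bool{seed: true}
	queue := []node{{seed, 0}}
	var pages []string
	for len(queue) > 0 && len(pages) < maxPages { // page cap
		cur := queue[0]
		queue = queue[1:]
		pages = append(pages, cur.url)
		if cur.depth == maxDepth { // depth cap: visit but don't expand
			continue
		}
		for _, link := range fetch(cur.url) {
			if visited[link] || denied[link] { // dedupe + denied URL filtering
				continue
			}
			visited[link] = true
			queue = append(queue, node{link, cur.depth + 1})
		}
	}
	return pages
}
```

In the real pipeline the aggregate result is then persisted and cached in Redis keyed by the seed, so a repeat crawl of the same seed can be served without re-traversal.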

Frontend Architecture

| Concern | Implementation |
| --- | --- |
| Routing | React Router route modules |
| State | Redux Toolkit slices |
| API client | Axios with `withCredentials` and token refresh retry |
| Auth state | `authSlice` (login/signup/verify/reset/keys/profile) |
| Job state | `dashboardSlice` (crawl/scrape/history/config/results) |

Documentation Architecture

| File/Area | Role |
| --- | --- |
| `docs.config.tsx` | Theme, nav, icons, metadata, AI settings |
| `app/docs/**/page.mdx` | Documentation content and order |
| `app/docs/layout.tsx` | Generated docs layout wrapper |