
How to Use Screaming Frog SEO Spider: A Complete Tutorial

Screaming Frog SEO Spider is a desktop tool that crawls websites the way a search engine does. It is essential for technical SEO audits.

Download and Install

Requirements

Minimum:
- RAM: 4GB (8GB+ recommended)
- Windows, macOS, or Ubuntu
- Java Runtime Environment

Download:
https://www.screamingfrog.co.uk/seo-spider/

Free vs Paid

Free version (500 URLs):
- Basic crawling
- Limited exports
- No scheduling

Paid (£199/year):
- Unlimited crawling
- Full exports
- API integrations
- Scheduling
- Custom extraction

Starting a Crawl

Basic Crawl

1. Open Screaming Frog
2. Enter URL in address bar
3. Click "Start"
4. Wait for completion

Options:
- Spider mode (crawl)
- List mode (check URLs)

Configuration Settings

Configuration → Spider:

Basic tab:
☑ Crawl all subdomains
☑ Follow internal nofollow
☑ Crawl outside of folder

Limits tab:
- Max crawl depth
- Max URLs
- Concurrent threads

Rendering tab:
- JavaScript rendering (if SPA)

Tab Overview

Internal Tab

Shows all internal URLs:
- Address (URL)
- Status Code
- Title
- Meta Description
- H1
- Word Count
- Indexability

Filter options:
- HTML
- JavaScript
- CSS
- Images
- PDF
- etc.

Response Codes Tab

Quick filters:
- 2xx Success
- 3xx Redirects
- 4xx Client Errors
- 5xx Server Errors

Focus on:
- 404 Not Found
- 301/302 Redirects
- 5xx Server Errors

Meta Tab

View all meta data:
- Title (length, pixel width)
- Meta Description
- Meta Keywords (legacy)
- Canonical URL
- Meta Robots

Find issues:
- Missing titles
- Duplicate titles
- Too long/short
- Missing descriptions

Page Titles Tab

Title analysis:
- Over 60 characters
- Below 30 characters
- Missing
- Duplicate
- Multiple H1
- Same as H1

Bulk edit opportunities.
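The title checks above are easy to reproduce in code against an exported crawl CSV. A minimal sketch, assuming rows of (URL, title) pairs — verify the column names ("Address", "Title 1") against your own export before wiring this up:

```python
# Bucket pages by the title problems listed above; thresholds (60/30 chars)
# come from this tutorial, not from Screaming Frog itself.

def audit_titles(rows):
    issues = {"missing": [], "over_60": [], "below_30": [], "duplicate": []}
    seen = {}
    for url, title in rows:
        title = (title or "").strip()
        if not title:
            issues["missing"].append(url)
            continue
        if len(title) > 60:
            issues["over_60"].append(url)
        elif len(title) < 30:
            issues["below_30"].append(url)
        seen.setdefault(title, []).append(url)
    issues["duplicate"] = [urls for urls in seen.values() if len(urls) > 1]
    return issues

pages = [
    ("/a", "Short"),
    ("/b", ""),
    ("/c", "A reasonably descriptive page title here"),
    ("/d", "A reasonably descriptive page title here"),
]
report = audit_titles(pages)
print(report["missing"])    # ['/b']
print(report["below_30"])   # ['/a']
print(report["duplicate"])  # [['/c', '/d']]
```

Feeding it the full export gives you the same buckets as the Page Titles filters, ready for a bulk-edit spreadsheet.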

H1/H2 Tabs

Heading analysis:
- Missing H1
- Duplicate H1
- Multiple H1s per page
- H2 structure

Check for a proper heading hierarchy.

Images Tab

Image audit:
- Missing Alt Text
- Alt Over 100 chars
- Oversized images
- Broken images
- Image file size

Export for optimization.

Canonicals Tab

Find issues:
- Missing canonical
- Canonicalised
- Non-indexable canonical
- Canonical mismatch

Self-referencing vs cross-domain.

Directives Tab

Robots directives:
- Index/Noindex
- Follow/Nofollow
- Canonical
- X-Robots-Tag

Indexability status.

Use Cases

Find Broken Links (404s)

Response Codes → 4xx
Export all 404s.

Right-click → Inlinks
See which pages link to the broken URLs.

Fix or redirect them.
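The Inlinks step can be scripted from exported link data. A sketch, assuming illustrative data shapes (a list of source→target edges plus a status-code map), not Screaming Frog's exact export columns:

```python
# Map each 4xx target URL to the pages that link to it, so every broken
# link can be fixed at its source.

def broken_link_sources(edges, status):
    report = {}
    for source, target in edges:
        if 400 <= status.get(target, 200) < 500:
            report.setdefault(target, []).append(source)
    return report

edges = [("/home", "/old-page"), ("/blog", "/old-page"), ("/home", "/about")]
status = {"/old-page": 404, "/about": 200}
report = broken_link_sources(edges, status)
print(report)  # {'/old-page': ['/home', '/blog']}
```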

Find Redirect Chains

Response Codes → Redirects (3xx)
Filter: Redirect Chains

A → B → C (chain)
Should be: A → C

Fix intermediate redirects.
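The A → B → C fix above can be sketched as a small chain-follower: given a redirect map, it reports each hop and the final destination every intermediate URL should point to directly. The redirect map here is illustrative, not an export format:

```python
# Follow a redirect map from a start URL; anything longer than two entries
# is a chain whose first hop should 301 straight to the last entry.

def flatten_chain(redirects, start, max_hops=10):
    chain = [start]
    while chain[-1] in redirects and len(chain) <= max_hops:
        nxt = redirects[chain[-1]]
        if nxt in chain:  # guard against redirect loops
            break
        chain.append(nxt)
    return chain

redirects = {"/a": "/b", "/b": "/c"}  # A -> B -> C
chain = flatten_chain(redirects, "/a")
print(chain)      # ['/a', '/b', '/c']
print(chain[-1])  # /c: every hop in the chain should 301 straight here
```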

Find Duplicate Content

Page Titles tab:
Filter: Duplicate

Content tab:
Hash (MD5) - exact duplicates
Near duplicates detection

Canonicalize or consolidate.
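The exact-duplicate check is just content hashing. A sketch of the same idea the Content tab uses — MD5 over lightly normalised body text, grouping URLs that share a digest (the normalisation step here is an assumption for the demo):

```python
import hashlib

# Group URLs whose (whitespace-normalised) body text hashes identically.

def duplicate_groups(pages):
    by_hash = {}
    for url, body in pages:
        digest = hashlib.md5(" ".join(body.split()).encode("utf-8")).hexdigest()
        by_hash.setdefault(digest, []).append(url)
    return [urls for urls in by_hash.values() if len(urls) > 1]

pages = [
    ("/a", "Same   body text."),
    ("/b", "Same body text."),   # whitespace differences collapse to a match
    ("/c", "Different body."),
]
groups = duplicate_groups(pages)
print(groups)  # [['/a', '/b']]
```

Near-duplicate detection needs fuzzier similarity (shingling/simhash) rather than an exact hash.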

Find Missing Meta

Page Titles:
Filter: Missing

Meta Description:
Filter: Missing

Export and fix.

Crawl Depth Analysis

Internal tab:
Add column "Crawl Depth"

Pages should be:
- Max 3 clicks from homepage
- Important pages: 1-2 clicks

Restructure if too deep.
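Crawl depth is simply breadth-first-search distance from the homepage over internal links. A sketch for recomputing it from an exported link graph (adjacency dict assumed for illustration):

```python
from collections import deque

# BFS from the homepage; the depth of each URL is its click distance.
# URLs missing from the result are unreachable, i.e. orphan pages.

def crawl_depths(links, home="/"):
    depth = {home: 0}
    queue = deque([home])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depth:
                depth[target] = depth[page] + 1
                queue.append(target)
    return depth

links = {"/": ["/products", "/blog"], "/products": ["/products/widget"]}
depths = crawl_depths(links)
print(depths["/products/widget"])                # 2 clicks from home
too_deep = [u for u, d in depths.items() if d > 3]
print(too_deep)                                  # pages to restructure
```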

Word Count Analysis

Internal tab:
Add column "Word Count"

Filter thin content:
- Below 300 words
- Very thin: below 100

Improve or consolidate.
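The thin-content filter above, applied in code to exported (URL, word count) rows, using this section's thresholds:

```python
# Flag pages below the thin (300) and very-thin (100) word thresholds.

def thin_content(rows, thin=300, very_thin=100):
    flagged = {"thin": [], "very_thin": []}
    for url, words in rows:
        if words < very_thin:
            flagged["very_thin"].append(url)
        elif words < thin:
            flagged["thin"].append(url)
    return flagged

rows = [("/a", 1200), ("/b", 250), ("/c", 40)]
flagged = thin_content(rows)
print(flagged)  # {'thin': ['/b'], 'very_thin': ['/c']}
```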

Custom Extraction

Extract Structured Data

Configuration → Custom → Extraction

Extraction type: CSS Selector

Example (product price):
h1.product-title
span.price

Export specific data.
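What CSS Selector extraction does can be approximated with the standard library: pull the text of elements matching a tag and class. The selectors here mirror the hypothetical product-page example above; real pages will need their own:

```python
from html.parser import HTMLParser

# Collect the text of every <tag class="cls"> element, roughly what a
# "tag.class" CSS selector extracts. Assumes no nesting of the same tag.

class ClassExtractor(HTMLParser):
    def __init__(self, tag, cls):
        super().__init__()
        self.tag, self.cls = tag, cls
        self.inside = False
        self.matches = []

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if tag == self.tag and self.cls in classes:
            self.inside = True
            self.matches.append("")

    def handle_endtag(self, tag):
        if tag == self.tag:
            self.inside = False

    def handle_data(self, data):
        if self.inside:
            self.matches[-1] += data

def extract(html, tag, cls):
    parser = ClassExtractor(tag, cls)
    parser.feed(html)
    return [m.strip() for m in parser.matches]

html = '<h1 class="product-title">Widget</h1><span class="price">£199</span>'
print(extract(html, "h1", "product-title"))  # ['Widget']
print(extract(html, "span", "price"))        # ['£199']
```

Inside Screaming Frog itself you only enter the selector; this sketch is for post-processing fetched HTML the same way.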

Extract Schema

Type: XPath
//script[@type='application/ld+json']

Check schema presence.
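The XPath above selects JSON-LD script blocks; the same check scripted against raw HTML might look like this sketch (a regex is enough for well-formed markup like the demo, though a proper HTML parser is safer on messy pages):

```python
import json
import re

# Pull every <script type="application/ld+json"> block and parse it.

LDJSON = re.compile(
    r'<script[^>]*type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def extract_schema(html):
    return [json.loads(block) for block in LDJSON.findall(html)]

html = """<script type="application/ld+json">
{"@type": "Product", "name": "Widget", "offers": {"price": "199"}}
</script>"""
schemas = extract_schema(html)
print(schemas[0]["@type"])  # Product
```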

API Integration

Google Analytics

Configuration → API Access → Google Analytics

Connect GA4:
- Sessions
- Bounce Rate
- Conversions
- etc.

Combine with crawl data.

Search Console

Configuration → API Access → Search Console

Get:
- Impressions
- Clicks
- CTR
- Average Position

Per URL data.

PageSpeed Insights

Configuration → API Access → PageSpeed Insights

Core Web Vitals:
- LCP
- FID/INP
- CLS

Per page scores.
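Google publishes fixed good/poor thresholds for each Core Web Vital, so the per-page scores can be bucketed directly (LCP in seconds, INP in milliseconds, CLS unitless):

```python
# Rate a metric value against Google's published CWV thresholds:
# good <= first bound, poor > second bound, "needs improvement" between.

THRESHOLDS = {
    "LCP": (2.5, 4.0),   # seconds
    "INP": (200, 500),   # milliseconds
    "CLS": (0.1, 0.25),  # unitless
}

def rate(metric, value):
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"

print(rate("LCP", 1.9))   # good
print(rate("INP", 320))   # needs improvement
print(rate("CLS", 0.31))  # poor
```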

Reports

Pre-built Reports

Reports menu:
- Crawl Overview
- Redirect Chains
- Orphan Pages
- Redirect & Canonical Chains
- Insecure Content
- Pagination

Sitemap Export

Sitemaps → Create XML Sitemap

Options:
- Images
- hreflang
- Last modified
- Priority

Export valid sitemap.
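A quick sanity check of the exported sitemap is easy with the standard library: confirm it parses as XML and that every `<url>` entry carries a `<loc>`:

```python
import xml.etree.ElementTree as ET

# List the <loc> of each <url> entry in a sitemaps.org-format sitemap.

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(xml_text):
    root = ET.fromstring(xml_text)
    return [u.findtext("sm:loc", namespaces=NS) for u in root.findall("sm:url", NS)]

xml_text = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc><lastmod>2026-01-07</lastmod></url>
  <url><loc>https://example.com/blog</loc></url>
</urlset>"""
urls = sitemap_urls(xml_text)
print(urls)  # ['https://example.com/', 'https://example.com/blog']
```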

Best Practices

Large Sites

For big sites:
- Increase RAM allocation
- Use database storage mode
- Limit initial crawl
- Crawl sections separately

Configuration → System → Memory

Regular Audits

Schedule crawls:
- Weekly for large sites
- Monthly for smaller
- After major changes

Compare crawls over time.

Combine Data

Export to Excel:
- Crawl data
- GSC data
- GA data

Create comprehensive report.
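The combine step is a join on URL. A sketch with plain dicts (the column names are illustrative; a pandas `merge` on the URL column does the same thing at scale):

```python
# Join crawl, GSC, and GA exports into one row per URL.

def combine(crawl, gsc, ga):
    report = {}
    for url, row in crawl.items():
        report[url] = {**row, **gsc.get(url, {}), **ga.get(url, {})}
    return report

crawl = {"/a": {"status": 200, "title_len": 54}}
gsc = {"/a": {"clicks": 120, "impressions": 4300}}
ga = {"/a": {"sessions": 310}}
merged = combine(crawl, gsc, ga)
print(merged["/a"])
```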

Quick Audit Checklist

Technical:
☐ Check 4xx errors
☐ Find redirect chains
☐ Verify robots.txt
☐ Check XML sitemap
☐ Review canonical tags

On-Page:
☐ Find missing titles
☐ Find missing descriptions
☐ Check H1 tags
☐ Find missing alt text

Content:
☐ Identify thin content
☐ Find duplicate content
☐ Check word counts
☐ Review crawl depth

Conclusion

Screaming Frog is a must-have tool for SEO professionals. The free version is enough for small sites, but the paid version is worth it for the API integrations and unlimited crawling.


Post link: https://www.tirinfo.com/cara-menggunakan-screaming-frog-seo-spider/

Hendra Wijaya
Tirinfo
4-minute read
7 January 2026