Five production-grade AI content tools, each powered by your own OpenAI or Gemini API key. Generate bulk meta tags, Schema.org JSON-LD, city-localised landing pages, and competitive counter-content — all from your GEO audit data.
Every Innotek BYOK toolkit uses your own OpenAI or Google Gemini API key — stored encrypted with AES-256-GCM in your account, never shared. This means:
You pay OpenAI or Google directly at their published rates. Innotek charges nothing per generation.
Your API key is encrypted with AES-256-GCM before storage. We never log or expose the raw key.
No credit limits, no throttling. Generate as many meta tags, schemas, or localised pages as your API key allows.
Every generation is informed by your GEO audit data — entity clarity scores, llms.txt context, and per-page metrics.
Connect your keys in Settings → Toolkits → Integrations. OpenAI keys (format: sk-...) power the Bulk Meta, Bulk Schema, and Counter-Measure tools. Google Gemini keys (format: AIza...) power the Localizer tool.
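As a minimal sketch, the provider routing implied by those key formats can be expressed as a simple prefix check before a key is encrypted and stored. The function and type names below are illustrative assumptions, not Innotek's actual API:

```typescript
// Hypothetical sketch: route a pasted key to the right provider based on
// its documented prefix (sk-... for OpenAI, AIza... for Google Gemini).
type Provider = "openai" | "gemini";

function detectProvider(apiKey: string): Provider | null {
  if (apiKey.startsWith("sk-")) return "openai";  // powers Bulk Meta, Bulk Schema, Counter-Measure
  if (apiKey.startsWith("AIza")) return "gemini"; // powers the Localizer
  return null; // unrecognised format: reject before encrypting and storing
}
```

A check like this catches pasted-in-the-wrong-box mistakes before any encryption or API call happens.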
The Innotek Invite Tool gives you a branded, token-secured invitation system for onboarding clients, running invite-only betas, or managing partner access to your GEO dashboard. Built on SendGrid with unique per-invite token URLs, 7-day expiry, and admin-scoped access control.
Enter the recipient's email address and an optional note (e.g. "Hi Sarah, here's your access to our GEO dashboard trial"). The system extracts the recipient's first name automatically.
A unique 24-character hex token (format: tk_inv_xxxxxxxxxxxxxxxxxxxxxxxx) is generated and stored in the database. A branded SendGrid email is dispatched with a personalised invitation link containing the token.
The link points to your registration page with the token pre-filled. After the recipient registers, the token is marked as accepted and their account is linked to the invitation.
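The token scheme above (a 24-character hex token behind a `tk_inv_` prefix, embedded in a registration link) can be sketched in a few lines. The base URL and query-parameter name here are illustrative assumptions:

```typescript
import { randomBytes } from "node:crypto";

// Sketch of the invite token format described above:
// 12 random bytes -> 24 hex characters, prefixed with tk_inv_.
function generateInviteToken(): string {
  return "tk_inv_" + randomBytes(12).toString("hex");
}

// Hypothetical registration URL with the token pre-filled.
function inviteUrl(token: string): string {
  return `https://app.example.com/register?invite=${token}`;
}
```

The 7-day expiry would then be enforced server-side by storing a creation timestamp alongside the token and comparing it on redemption.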
The Invite Table in your dashboard shows each invite's status (Pending, Accepted, Expired). Copy the invite URL to share via Slack or email, resend if the email was missed, or revoke an invite before it's accepted.
Send each new client a personalised invite link. No manual account setup — they register themselves and land directly in the platform.
Run a controlled beta with a fixed number of testers. Each token is single-use, so you control exactly who gets in.
Give agency partners their own invite allocation. They send invites to their clients under their account umbrella.
Batch-invite users from a waitlist. Rate limiting (20 invites/hour) prevents abuse and keeps your SendGrid sender reputation clean.
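The 20-invites-per-hour limit mentioned above could be implemented as a sliding window. This in-memory version is a simplified sketch (a production system would back the timestamps with the database); all names are hypothetical:

```typescript
// Sliding-window rate limiter sketch for the 20 invites/hour cap.
const WINDOW_MS = 60 * 60 * 1000; // one hour
const MAX_INVITES = 20;
const sentAt = new Map<string, number[]>(); // userId -> send timestamps

function canSendInvite(userId: string, now = Date.now()): boolean {
  // Keep only timestamps that still fall inside the window.
  const recent = (sentAt.get(userId) ?? []).filter(t => now - t < WINDOW_MS);
  if (recent.length >= MAX_INVITES) {
    sentAt.set(userId, recent);
    return false; // over the hourly cap
  }
  recent.push(now);
  sentAt.set(userId, recent);
  return true;
}
```

Because the window slides rather than resetting on the hour, a burst of 20 invites blocks further sends until the oldest one ages past 60 minutes.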
Most bulk meta tag tools generate generic titles and descriptions that ignore your brand's specific context, GEO audit findings, and entity clarity gaps. Innotek's Bulk Meta Tag Optimizer is different: it feeds your per-page GEO audit data — entity clarity score, llms.txt summary, premium AI readiness metrics, and top GEO recommendations — directly into OpenAI gpt-4o-mini alongside the raw page markdown. The result is titles and descriptions that are not just keyword-optimised, but GEO-informed.
Choose any completed GEO analysis from your dashboard. The tool shows all crawled pages with their URL, character count, and GEO grade.
For each page, the tool fetches: (a) site-level GEO context (domain, grade, entity clarity, llms.txt), (b) per-page GEO audit data (entity clarity score, AI readiness metrics, top recommendations), and (c) the full page markdown from Supabase Storage. All three are combined into a structured prompt.
The model receives audit-driven instructions based on your specific scores: entity clarity <9 triggers "name the brand explicitly in the title"; technical SEO <6 triggers "front-load the primary keyword"; trust signals <6 triggers "pull a credential or certification from the page content". Temperature 0.2 for consistency.
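The threshold-to-instruction mapping described above can be sketched as a small rule table. Innotek's actual prompt text is not public; these strings paraphrase the documented rules, and the score scale (0–10) is an assumption:

```typescript
// Hypothetical shape of the per-page audit scores (assumed 0-10 scale).
interface PageAudit {
  entityClarity: number;
  technicalSeo: number;
  trustSignals: number;
}

// Sketch of the audit-driven instructions fed to the model.
function auditInstructions(a: PageAudit): string[] {
  const rules: string[] = [];
  if (a.entityClarity < 9) rules.push("Name the brand explicitly in the title.");
  if (a.technicalSeo < 6) rules.push("Front-load the primary keyword.");
  if (a.trustSignals < 6) rules.push("Pull a credential or certification from the page content.");
  return rules;
}
```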
Each title (≤60 chars) and description (≤160 chars) is displayed with a colour-coded character badge: green = within range, amber = close, red = over limit. A "GEO-informed ✦" badge appears when per-page audit data was used. Entity clarity score shown per row.
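The colour-coded badge logic reduces to a length classifier. The doc specifies only the three colours and the 60/160 limits; the "close" margin used here (within 10% of the limit) is an assumption:

```typescript
type Badge = "green" | "amber" | "red";

// Sketch of the character badge: green = within range, amber = close
// to the limit (assumed: last 10%), red = over the limit.
function charBadge(text: string, limit: number): Badge {
  if (text.length > limit) return "red";
  if (text.length >= Math.floor(limit * 0.9)) return "amber";
  return "green";
}

const titleBadge = (t: string) => charBadge(t, 60);   // titles: ≤60 chars
const descBadge = (d: string) => charBadge(d, 160);   // descriptions: ≤160 chars
```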
Copy individual tags with one click, or export the full batch as a CSV file with columns: URL, Title, Description, Char counts, GEO Grade, Entity Clarity.
Refresh all meta tags across a 50-page site in one batch. Far more accurate than regex-based bulk editors because each output is grounded in the actual page content.
Meta tags go stale. Run the optimizer on last year's top pages, export the CSV, and hand it to your dev team as a structured implementation ticket.
Generate 2–3 title variants per page by re-running the optimizer (even at temperature 0.2, successive runs produce slight variation), then A/B test click-through rates in Search Console.

Produce a professional CSV deliverable for each client showing current vs. recommended meta tags, with GEO grade and entity clarity as justification columns.
Requires an OpenAI API key (format: sk-...). Connect it in Settings → Toolkits → Integrations. The tool uses gpt-4o-mini at temperature 0.2 — approximately 800–1,200 tokens per page ($0.00012–$0.00018 per page at current OpenAI pricing). A 50-page site costs under $0.01 to process.
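The per-page figures quoted above follow from simple arithmetic. The rate used below (roughly $0.15 per million input tokens for gpt-4o-mini) is an assumption based on published pricing at the time of writing and treats the batch as input-dominated; check OpenAI's pricing page for current rates:

```typescript
// Back-of-envelope cost sketch. ASSUMPTION: ~$0.15 per 1M gpt-4o-mini
// input tokens; output tokens (billed higher) are ignored here because
// meta tags are short relative to the page content in the prompt.
const USD_PER_TOKEN = 0.15 / 1_000_000;

function estimateBatchCost(pages: number, tokensPerPage: number): number {
  return pages * tokensPerPage * USD_PER_TOKEN;
}
```

At the top of the quoted range, a 50-page site at 1,200 tokens per page comes to about $0.009, matching the "under $0.01" claim.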
Schema.org hallucination is a real problem: AI-generated JSON-LD that looks correct but contains invented properties, wrong @type values, or missing required fields. The Innotek Bulk Schema.org Fixer eliminates hallucination by grounding every generation in your actual page content (downloaded from Supabase Storage) and your GEO audit findings. It validates @context and @type at the API level before returning any result.
Choose any completed GEO analysis. The tool lists all crawled pages with their URL, char count, and existing schema grade.
The API fetches three inputs: (a) site context (base URL, GEO grade, entity clarity score, llms.txt summary), (b) per-page GEO data (entity clarity, AI summary, top recommendations), and (c) the full page markdown from your Supabase Storage bucket.
The model is called with response_format: { type: 'json_object' } at temperature 0.1, instructed to select the most appropriate Schema.org @type for the page content and generate a complete, valid JSON-LD block. The system prompt specifies: never invent properties; only use Schema.org v25 documented properties.
The API validates that the response contains @context: "https://schema.org" and a non-empty @type string. If validation fails (e.g. the model returned a wrapper object instead of the schema directly), the request returns a 502 error rather than invalid JSON-LD.
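The validation gate described above (accept the model's output only if it carries the schema.org `@context` and a non-empty `@type`, otherwise return 502) can be sketched as follows. The field checks mirror the doc; the function shape is an illustrative assumption:

```typescript
// Sketch of the JSON-LD validation gate. A wrapper object (e.g. the model
// returning { "schema": {...} } instead of the schema itself) fails the
// @context check and would trigger the 502 path.
function isValidJsonLd(raw: string): boolean {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return false; // not JSON at all
  }
  if (typeof parsed !== "object" || parsed === null) return false;
  const obj = parsed as Record<string, unknown>;
  const type = obj["@type"];
  return obj["@context"] === "https://schema.org" &&
         typeof type === "string" && type.length > 0;
}
```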
Each schema block appears in an expandable row with the @type badge (e.g. "Product", "FAQPage", "Article") and a GEO-informed indicator. Copy to clipboard with one click, or export the full batch as a JSONL file (one JSON object per line) for bulk deployment scripts.
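The JSONL export format above is simply one JSON object per line, which keeps each schema independently parseable by deployment scripts. A minimal sketch (`toJsonl` is a hypothetical helper name):

```typescript
// One JSON object per line: easy to stream, grep, and apply per-page.
function toJsonl(schemas: object[]): string {
  return schemas.map(s => JSON.stringify(s)).join("\n");
}
```

A consuming script can then `split("\n")` and `JSON.parse` each line without loading the whole batch into one document.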
Every JSON-LD block is grounded in your actual page markdown. The model cannot invent a product SKU or address it didn't read from the source content.
When entity clarity is low (<9), the model prioritises Organization and Person schema types to improve AI disambiguation. When schema completeness is low, it targets the highest-impact missing properties first.
Generate Product, Offer, and AggregateRating schemas for every product page in your crawl. JSONL export makes it trivial to import into any e-commerce platform.
Automatically detects article, blog post, and guide pages and generates Article or TechArticle schema with correct author, publisher, and datePublished fields.
Requires an OpenAI API key (format: sk-...). Uses gpt-4o-mini at temperature 0.1 with response_format: { type: "json_object" }. Approximately 1,000–1,500 tokens per page ($0.00015–$0.00023 per page). A 50-page site costs under $0.012.
Programmatic SEO at city-level scale is one of the highest-ROI strategies for service-area businesses — but it's time-consuming to do well. Thin, auto-generated "We also serve [City]" pages are penalised by AI agents as low-quality. The Innotek Localizer uses Gemini 2.0/2.5 Flash to clone a source landing page into genuinely localised variants, injecting local business names, landmarks, neighbourhood references, and address variants into the content at a semantic level.
Choose a completed GEO analysis (your site) and select one of your crawled pages as the source template — typically your primary service or landing page.
Paste a comma-separated list of cities (e.g. "London, Manchester, Birmingham, Leeds, Bristol"). There's no hard limit — generate 50 city variants in a single batch if needed.
Choose from the available Gemini models grouped by tier: ⚡ Fast (gemini-2.0-flash-lite, gemini-1.5-flash-8b), ⚖️ Balanced (gemini-2.5-flash, gemini-1.5-pro), or 🔮 Powerful (gemini-2.5-pro). The tool shows token context windows and cost classification per model.
For each city, the prompt instructs Gemini to: (a) replace all city/region references in the source content with the target city, (b) inject at least 3 local entity signals (a local landmark, the city council or authority, a known local business district), (c) adapt any address or contact information to the target city, and (d) maintain the same structure, tone, and fact density as the source page.
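Steps (a)–(d) above can be reconstructed as a per-city prompt template. Innotek's actual prompt wording is not public, so the strings below are a paraphrased, illustrative assumption:

```typescript
// Hypothetical reconstruction of the per-city localisation prompt.
function localizerPrompt(sourceMarkdown: string, city: string): string {
  return [
    `Rewrite the landing page below for ${city}.`,
    `- Replace every city/region reference with ${city}.`,
    `- Inject at least 3 local entity signals (a landmark, the city council or authority, a known business district).`,
    `- Adapt any address or contact details to ${city}.`,
    `- Keep the same structure, tone, and fact density as the source page.`,
    ``,
    sourceMarkdown,
  ].join("\n");
}
```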
Each city variant is displayed in a collapsible card with its city name as the header. Copy individual variants or click "Export ZIP" to download all variants as separate Markdown files (e.g. london.md, manchester.md) in a single ZIP archive.
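The per-city filename scheme (london.md, manchester.md) implies a slug rule for multi-word cities. The rule below (trim, lowercase, spaces to hyphens) is an assumption consistent with the examples shown:

```typescript
// Sketch of the per-city Markdown filename used in the ZIP export.
function cityFilename(city: string): string {
  return city.trim().toLowerCase().replace(/\s+/g, "-") + ".md";
}
```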
A plumber covering 20 postcodes. A law firm with 8 office locations. A cleaning company serving every borough in London. One template, 20 city variants, in under 5 minutes.
Generate city-level landing pages for each franchise location. Each page has genuine local signals — the franchisee's suburb, nearby landmarks, local trading estate — not just the city name swapped in.
Build a content hub with a city-specific cluster page for every major UK city. Internal link from a national overview page to each city variant, creating a semantic location authority structure.
Localise a product landing page for 50 cities in a new market. The Gemini model understands international city context — local currencies, regulatory bodies, neighbourhood names — without explicit instruction.
Requires a Google Gemini API key (format: AIza...). Connect it in Settings → Toolkits → Integrations. Default model: gemini-2.5-flash (fast and cost-effective). Approximately 2,000–4,000 tokens per city variant at temperature 0.4.
Your Innotek Premium Research analysis identifies up to 5 specific competitor content gaps — areas where your competitors are outranking you in AI citations and your content is either absent or weak. The Counter-Measure Generator converts each gap into a ~2,000-word, publish-ready blog post using a two-pass AI generation pipeline: first a Markdown content brief (Pass 1), then a branded dark-mode HTML wrapper (Pass 2). The output is genuinely GEO-informed — not a generic article, but a targeted counter-measure designed to close a specific competitive gap.
The tool only shows analyses where the premium Gemini research pipeline has completed (current_step = 'done'). This ensures competitor gap data is available.
The tool fetches up to 5 competitor content gaps from gemini_technical_seo_notes in your premium research record, plus the full deepsearch_content summary as context. Each gap is displayed as a PostCard with a GAP badge.
Choose OpenAI (gpt-4o, gpt-4o-mini, o1-mini, etc.) or Google Gemini (gemini-2.5-flash, gemini-2.5-pro, etc.), then select a specific model. Both providers are shown simultaneously if you have both keys connected.
The first AI call generates a structured Markdown blog post at temperature 0.7 (creative). The prompt instructs the model to: include an H1 title, multiple H2/H3 sections, a data table, numbered recommendations, and maintain a minimum fact density of 20+ verifiable claims per 1,000 words. The gap text from your competitor research is the primary instruction.
The second AI call takes the Markdown output and wraps it in a production-ready dark-mode HTML page at temperature 0.2 (precise). The HTML uses a consistent Innotek brand style: background #0d1117, accent colour #00e5ff, responsive typography. If Pass 2 fails or produces invalid HTML, the system automatically falls back to a minimal clean HTML wrapper.
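The Pass 2 fallback can be sketched as a validity check plus a minimal wrapper: if the model's HTML is unusable, wrap the Pass 1 Markdown instead of failing the whole generation. The check and wrapper below are simplified assumptions; the brand colours (#0d1117, #00e5ff) come from the description above:

```typescript
// Loose validity check: does the output at least contain a paired <html> element?
function looksLikeHtml(s: string): boolean {
  return /<html[\s>]/i.test(s) && /<\/html>/i.test(s);
}

// Minimal clean fallback wrapper in the brand's dark-mode palette.
function fallbackWrapper(markdown: string, title: string): string {
  return `<!DOCTYPE html><html><head><meta charset="utf-8"><title>${title}</title>` +
    `<style>body{background:#0d1117;color:#e6edf3;font-family:sans-serif;` +
    `max-width:720px;margin:0 auto;padding:2rem}a{color:#00e5ff}</style></head>` +
    `<body><pre style="white-space:pre-wrap">${markdown}</pre></body></html>`;
}
```

The point of the fallback is that a batch of five posts never loses a post to one bad Pass 2 response; the Markdown from Pass 1 always survives.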
Each PostCard shows: word count, a ▼ View toggle for the Markdown preview, a ⬡ HTML button that opens the branded HTML in a new browser tab (via Blob URL), and a Copy button for the Markdown. Click Generate All to process all 5 gaps sequentially, or Stop mid-batch. Export ZIP downloads all 5 posts as .md + .html file pairs.
Your competitor ranks for "entity clarity score" and you don't. Instead of writing a generic article, the Counter-Measure Generator produces a post specifically designed to out-cite and out-fact-density the competitor page.
Every generated post is instructed to maintain high fact density (20+ verifiable claims per 1,000 words), use clear H2/H3 hierarchy, and include a data table — the exact signals that improve AI citation rates.
The Pass 2 HTML output is publish-ready. Drop it into your CMS, Ghost, or static site generator. The branded dark-mode style can be customised by editing the HTML before publishing.
Click "Generate All" to process all 5 competitor gaps sequentially in the background. The Stop button lets you cancel mid-batch if the first 2–3 posts are sufficient for your sprint.
Requires an OpenAI API key (sk-...) or a Google Gemini API key (AIza...). Connect either or both in Settings → Toolkits → Integrations. Requires an Enterprise plan (for access to premium research). Each two-pass generation uses approximately 4,000–6,000 tokens for Pass 1 and 6,000–10,000 tokens for Pass 2. The maxDuration=60 API timeout accommodates both passes in a single request.
Every BYOK toolkit generates content from crawled site data and GEO audit scores — but if you're working with agency clients whose websites have thin content, the generated output can be similarly thin. The Brand Knowledge Base solves this by letting you upload a brand guidelines PDF once. Gemini 2.0 Flash reads the document and extracts tone of voice, target audience personas, brand values, and communication rules into structured prose. From that point on, every generation in every toolkit automatically includes this brand context, invisibly overriding generic assumptions with client-specific guidance.
Navigate to the Toolkits → Brand Knowledge Base section. Upload any PDF up to 10 MB — brand guidelines, tone-of-voice documents, audience personas, or communication rulebooks. Only PDF format is accepted.
The platform's Gemini integration reads your PDF and extracts: tone of voice, target audience, brand values, USPs, and communication rules. The result is 400–600 words of plain prose stored against your account. No BYOK key required — this uses the platform Gemini key.
Every subsequent generation in Bulk Meta, Bulk Schema, Localizer, and Counter-Measure now receives a "BRAND KNOWLEDGE BASE" block in the prompt. The AI is instructed to apply the tone of voice, audience context, and communication rules to every piece of content it writes.
If the brand guidelines change, upload a new PDF. The old storage file is automatically deleted and replaced. Up to 3,000 characters of the extracted text are injected per generation — capped to stay within typical model context budgets.
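The injection step described above reduces to capping the stored brand text at 3,000 characters and prepending it as a labelled block. The block label matches the doc; the function shape is an illustrative assumption:

```typescript
const BRAND_CONTEXT_LIMIT = 3000; // characters injected per generation

// Sketch: prepend the (capped) brand context to any toolkit prompt.
function withBrandContext(prompt: string, brandText: string | null): string {
  if (!brandText) return prompt; // no Brand Knowledge Base uploaded
  const capped = brandText.slice(0, BRAND_CONTEXT_LIMIT);
  return `BRAND KNOWLEDGE BASE:\n${capped}\n\n${prompt}`;
}
```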
Upload the client's brand guidelines PDF on their first project. Every meta tag, schema, and localised page generated from their GEO audit will automatically reflect their brand voice — not generic SEO copy.
If a brand has specific language rules (e.g. "never say 'solutions', use 'tools' instead"), define them in the guidelines doc. The Brand Knowledge Base ensures these rules propagate to every generated piece.
Upload a document that defines the primary audience segment — demographics, pain points, terminology. Generated content will address that audience specifically rather than defaulting to a generic reader.
Each user account has one Brand Knowledge Base. In an agency workflow, team members operate separate accounts or upload a new PDF per client sprint — replacing the previous guidelines cleanly.
No BYOK key required. The Brand Knowledge Base uses the platform Gemini 2.0 Flash integration — available to all plan tiers. PDF only, maximum 10 MB. Text-based PDFs only — scanned image PDFs without a text layer will upload successfully but extraction will return no text, and an amber warning will be shown. Up to 3,000 characters of extracted brand context are injected per toolkit generation.
We build in public. All three tools are MIT-licensed, production-ready, and free to fork. The same infrastructure we use to publish our own content — open for the entire GEO community.
CLI for running GEO audits from the terminal. Generates llms.txt from Markdown/MDX directories and scaffolds new Next.js article sites with auto-generated nav.
Astro + Strapi monorepo for an article CMS. Full Innotek design system, three HTML post-processors (stat grids, numbered cards, feature cards), zero CSS framework dependencies.
Modular Node.js blog library. ContentAdapter interface supports local files, Strapi, and Contentful. Includes REST API, RSS generation, client-side search, and HTML-to-MDX migration CLI.
Create a free account, run your first GEO audit, connect your API key, and start generating bulk meta tags in under 10 minutes.