This AI Tool Will Make You RANK FAST (Full Walkthrough)

Caleb Ulku 22:23
Transcript
0:00 Most agencies spend 60 hours or more and thousands of dollars in labor to rank a local business.
0:05 Even with AI helping write content, you're looking at 30 to 45 minutes per page,
0:10 times 30 pages if not more. That's a lot of work per client before the full system is even in place.
0:17 Now I built an AI agent that runs this entire process. We start with entity research, GBP audit,
0:23 gap analysis, content production, schema, images, video generation, YouTube upload,
0:26 and WordPress deployment. All of it, about 90 minutes of runtime under a dollar per page in API
0:33 costs. And it's bring your own key, no markup. I'm going to walk you through the exact five-step
0:38 process my agency has used to rank hundreds of local businesses since 2016. And after each step,
0:44 I'm going to show you this AI agent executing it in real time. Now, this is the rank map for the
0:50 test site we used, a GBP for a plumber in Malden, Massachusetts. Average position 6.18, only 15%
0:57 of the map in the top three. Look at all that orange, sevens, eights, barely cracking the top
1:03 three anywhere except right on top of their address and a few random spots in the outskirts.
1:07 Now, by the end of this video, I'm going to show you what the same rank map looks like
1:11 after we ran this system. Excuse you. But first, let me show you what this agent actually looks
1:18 like when it's running because once you see it you'll understand why most of what agencies charge
1:23 for it is about to become irrelevant. This is what it looks like when you first go to it. I'll
1:29 just blur my email address and enter in here. When we first log in, we have a client overview,
1:34 so we'll hit the test site here. And this is the main interface: across the top are all the things
1:39 that we want to do. We start with settings, categories, crawl, research, and so on.
1:43 Step one is understanding what Google currently thinks your client's business is.
1:48 Now since 2018, Google doesn't match keywords.
1:50 It matches entities.
1:52 Your GBP is an entity.
1:54 Your website is an entity.
1:55 Every service you list, every category on your GBP is an entity.
2:00 Your geography is an entity.
2:02 Google is constantly checking whether all of these entities match.
2:05 The closer they match, the better you're going to rank.
2:07 The more gaps between them, the less Google trusts you.
2:11 So the first thing we do with any new client is entity research.
2:15 What categories does the GBP have?
2:17 What's the primary?
2:18 What are the secondaries?
2:19 What are the competitors using that this client isn't?
2:22 Because every missing category is a search where Google is going to show your competitor
2:27 instead.
2:28 Here's how this tool works with that.
2:29 We'll come on to GBP categories and we'll type in plumber Malden and we'll run the category
2:34 audit.
2:35 Now, what the tool is going to do is literally go and search for the businesses around Malden,
2:40 Massachusetts.
2:41 And it's going to say the plumber category was used
2:43 68 times.
2:45 So that means there are 68 GBPs in Malden, Massachusetts with plumber as their category.
2:52 Now, those plumbers, they used electrician 17 times, heating contractor 14, bathroom remodeler 8, etc, etc.
2:57 So all we have to do is go down this list and see if any of these categories make sense for this client, for this business, for what they offer.
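Conceptually, the category audit is just a tally across competitor profiles. Here is a minimal sketch with a toy input (real counts would come from scraping the local results, not this hand-built list):

```python
from collections import Counter

def audit_categories(competitor_profiles):
    """Tally every category used by competitor GBPs in the target market.

    `competitor_profiles` is a list of category lists, one per profile
    (a hypothetical input shape -- real data would come from a Maps scrape).
    Returns categories sorted by how often competitors use them.
    """
    counts = Counter()
    for categories in competitor_profiles:
        counts.update(categories)
    return counts.most_common()

# Toy example mirroring the Malden audit in spirit:
profiles = [
    ["Plumber", "Heating contractor"],
    ["Plumber", "Electrician"],
    ["Plumber"],
]
print(audit_categories(profiles))
```

You would then walk the sorted list from the top and keep any category the client actually offers.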
3:06 So now we know what categories we want to associate with this business.
3:09 But knowing those entities means nothing if the website doesn't back them up.
3:14 So that's step two.
3:16 And this is where most local businesses are completely exposed.
3:20 The GBP might have four categories and 30 services,
3:25 but the website has a homepage, an about page, a contact page,
3:29 and maybe a service page that lists a handful of those 30.
3:33 It ends up being five or six pages trying to support 30 or 35 entities.
3:37 Google sees this mismatch and it's not just Google anymore.
3:41 It's Google's own AI, it's ChatGPT, perplexity, it's Claude.
3:46 They're all evaluating the same entity signals.
3:49 When you fix this for Google, you're simultaneously becoming visible to every AI model that's starting to replace traditional search.
3:56 If your client's website has fewer pages than their GBP has services, then you know what the problem is.
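That comparison is, at its core, a set difference between the entities on the GBP and the entities the website's pages support. A minimal sketch (the page-to-entity mapping is supplied by hand here; in the tool it's assigned during the crawl):

```python
def gap_analysis(gbp_entities, site_pages):
    """Compare GBP categories/services against the pages found by a crawl.

    `site_pages` maps URL -> the service or category that page supports.
    Returns the entities with no supporting page -- the content gap.
    """
    covered = set(site_pages.values())
    return sorted(set(gbp_entities) - covered)

gbp = ["Plumber", "Water Heater Repair", "Drain Cleaning", "Sump Pump Installation"]
pages = {"/": "Plumber", "/drain-cleaning": "Drain Cleaning"}
print(gap_analysis(gbp, pages))
# The two services without pages are the gap
```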
4:02 So this is why at my agency, we develop the Core 30.
4:05 That means one page for every category and service that's on the Google business profile.
4:10 The homepage targets the primary category plus city.
4:13 Each service gets its own page and the internal linking follows the same hierarchy that we have on the GBP.
4:19 Homepage to the secondary categories and the secondary category down to the service pages under that secondary category.
4:25 You basically build a website that is an exact mirror image of your Google business profile.
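The hierarchy just described can be sketched as a URL tree: homepage targeting primary category plus city, secondary category pages linked from it, service pages under each secondary. Slugs and path shapes here are illustrative, not the tool's actual output:

```python
def build_core30(primary, city, secondaries):
    """Sketch the Core 30 URL tree mirroring the GBP hierarchy.

    `secondaries` maps each secondary category to its list of services.
    Returns a dict of URL path -> the entity that page targets.
    """
    slug = lambda s: s.lower().replace(" ", "-")
    tree = {"/": f"{primary} {city}"}  # homepage targets primary category + city
    for category, services in secondaries.items():
        cat_path = f"/{slug(category)}/"
        tree[cat_path] = category  # linked from the homepage
        for service in services:
            # service pages linked from their secondary category page
            tree[cat_path + slug(service) + "/"] = service
    return tree

site = build_core30("Plumber", "Malden", {
    "Heating Contractor": ["Boiler Repair", "Furnace Installation"],
})
for path, target in site.items():
    print(path, "->", target)
```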
4:30 But you can't build that structure if you don't know what's missing.
4:32 So that's where we're going to come in and do a gap analysis.
4:36 Most agencies skip this entirely, and it's a big part of the reason their clients are stuck for months.
4:42 So to do the gap analysis, we start by pasting in categories and services.
4:46 So we're just going to literally copy paste in any format you choose and hit parse,
4:51 and the tool will parse out the categories and services on your GBP.
4:55 Then we're going to come over to this crawl tab and we'll hit crawl.
4:59 And once it finishes crawling, it will give you all the pages on the website, including what the AI tool thinks is the correct service or category for that page.
5:09 So, for example, here we could say that page is for Plumber.
5:12 And if the tool decided it, it's going to say auto on it.
5:15 So we can select manually,
5:17 or we can let the tool decide.
5:19 But the point is we go through and make sure this is correct.
5:22 And then we come on over to research.
5:24 And what this is going to tell us is that we have pages for all of these services and categories, but we're missing pages for these. So we want to import those to the bulk content generator and get them on the queue so that we can generate content for them later. Okay, so that gives us the gap analysis. So we now
5:41 know what the core 30 looks like for this client. But for a lot of markets, the core 30 alone isn't
5:46 enough to dominate. We need to go further. We need to build more relevance. And this is where most
5:51 agencies have no idea what to do next. So their suggestion is weekly blog posts, which is a waste
5:56 of time, do not do weekly blog posts. There are two additional types of relevance that we need to
6:01 build. Topical relevance and geographical relevance. Remember, Google matches entities.
6:07 Topical relevance connects your business entity with your service entity. It proves you actually
6:11 do what you say you do. Geographical relevance connects your business and service entities with
6:16 a specific location entity. It proves you operate where you say you operate. You need both. Most
6:22 agencies build neither. We need to start with topical relevance, so we search for the target
6:27 service and pull People Also Ask questions. We have competitor headlines that are already ranking.
6:32 We find Reddit threads: real people asking real questions about that service in that location.
6:37 And all of this feeds into additional content topics beyond the Core 30 pages that answer the
6:43 exact questions your client's customers are already typing into Google and into ChatGPT.
6:49 Doing this manually, even with AI helping, can take several hours per client.
6:53 Let me show you what the tool does.
6:55 So we're going to come on over to Supporting Planner, and I'm just going to pull up the past run so that you don't have to sit and watch it run.
7:01 It takes a few minutes.
7:02 You'll type in the service term and hit research.
7:04 And then the tool is going to scrape the people also ask questions.
7:08 It's going to scrape Reddit and find these.
7:10 It's going to scrape other local forums.
7:12 And you'll go through this list and think, are these actually relevant to the search term plumber for people who are looking for a plumber?
7:19 So how much should a plumber charge per hour in 2026?
7:22 And if you don't like it exactly, we can click on here and we can maybe change it to how much should a plumber charge per job in 2026 and save.
7:30 And now that becomes the term.
7:32 The scoring comes from the AI tool itself.
7:34 How relevant does the tool think that particular query is for the service term?
7:40 But obviously, I probably wouldn't do something like DIY plumbing versus hiring a plumber.
7:46 But any ones that you like, we just tick the boxes and hit send to bulk queue. Now, geographical
7:53 relevance. I want to show you how we used to do this manually so the concept is very clear before
7:58 you see what the tool does. So we're going to start by looking at this local rank map. If you
8:03 haven't seen one of these before, this is telling you: if you were standing in the places where the
8:07 dots are and you searched for "window cleaning Las Vegas", this is the position that this particular
8:13 business would rank in. So here, a lot of ones, twos, and threes, and then we very
8:17 quickly fall off. So now, if I need to build geographical relevance, I need to convince Google
8:22 that, hey, right where this four is, I need to convince Google that we actually clean windows
8:26 in that location. So I'm going to zoom in, and I'm going to be looking for landmarks that are on
8:32 Google Maps. So this is a Google Maps-based interface. This is the tool LeadSnap. And with
8:36 this, anything that's on this map, we know Google recognizes because this is Google. So Highland
8:42 Falls Golf Club. We would write an article that says window cleaning near the Highland Falls
8:47 Golf Club in Las Vegas. Window cleaning near Timberline in Las Vegas. Window cleaning near
8:54 the Police Memorial Park near Las Vegas. We're basically looking for these hyper-local signals
8:59 to convince Google that we do window cleaning in this area, and all of those articles would be
9:03 based on trying to turn this 4 into a 3. Because I would much rather turn a 4 into a 3 than turn a
9:10 10 into a nine. Nine is still losing, not getting any traffic. Now let me show you what the tool does.
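The "how far away and in what direction" readout the tool gives for each place (e.g., miles east of the GBP address) is standard great-circle math. A minimal sketch, with illustrative coordinates rather than the real Malden GBP location:

```python
import math

def distance_and_direction(origin, target):
    """Great-circle distance (miles) and 8-point compass direction from a
    GBP location to a landmark, both given as (lat, lon) in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*origin, *target))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    # Haversine distance on an Earth radius of 3958.8 miles
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    miles = 3958.8 * 2 * math.asin(math.sqrt(a))
    # Initial bearing, folded onto an 8-point compass
    y = math.sin(dlon) * math.cos(lat2)
    x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    compass = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"][round(bearing / 45) % 8]
    return round(miles, 1), compass

# A point a few miles east of an origin near Malden (coordinates are illustrative)
print(distance_and_direction((42.4251, -71.0662), (42.4180, -70.9890)))
```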
9:15 So I'm going to come on over to Geo Planner. You basically start by uploading your local rank map
9:19 from LeadSnap, and this is designed to work with LeadSnap. If you've not used LeadSnap,
9:25 it's a very good tool. We use it for GBP management. There's a link in the description.
9:29 If you want to try it, you'll get half off your first three months. It's an affiliate link,
9:33 so if you don't want to use it, no problem, but it will give you that discount. So anyway,
9:36 you'll upload your local rank map, put in your target service and the city, and then you'll
9:41 hit generate places. What the tool is going to do is analyze all of the neighborhoods on it. These
9:46 directions are how far away and in what direction these places are from your GBP. So the Medford Mystic
9:53 Avenue corridor is 3.3 miles east of the GBP address, and here's the reason why the tool thinks
9:59 that this is a nine out of ten: "Rank six. A single well-optimized location page targeting landmarks could
10:04 push into the top three. Competitive density..." and so on. So it's giving you the reasoning for
10:08 why it wants to target each area. Once we decide whether we agree
10:14 or not, we scroll down here, and here's all the content around that. So we have Medford City
10:20 Hall, we have Hormel Stadium. This is a yoga studio; we don't really want to write stuff about
10:25 the yoga studio, so we'll untick that. There's the general Mystic Avenue corridor, the LEGO
10:31 Discovery Center. So you can go through all of these different landmarks, decide which ones we
10:35 like, and send them to the bulk content generator. It's important to note that all of these landmarks
10:41 are being pulled directly from the Google Places API. These are coming from Google's own
10:46 database; this is not guesswork. So now we have every page that needs to exist: the Core 30, the
10:53 topical relevance pages, and the geographical relevance pages. But knowing what pages to build
10:57 is half the problem.
10:59 Because if the content on those pages
11:00 reads like everything else on the internet,
11:02 Google has no reason to rank it.
11:04 So how do you fill all these pages without sounding like a robot? Two things kill local content. First, if it reads like every other plumbing page on the internet, then Google basically has no reason to rank it. It
11:18 actually has no reason to even index it. And indexing generic AI content is becoming more
11:23 and more difficult. The content needs to be distinct. It needs to add something that isn't
11:27 already out there. Second, it needs to be local. Not "we serve the Malden area" local, but actually
11:32 local. A plumber in Malden should be talking about the triple-deckers in Medford, the old
11:37 pipe infrastructure in Somerville, the coastal weather coming off of Revere Beach. That's what
11:42 makes Google and the AI systems believe you're genuinely local, even if you've never set foot
11:47 in that city before. So before the agent writes a single word, it does multiple rounds of research.
11:53 First, it goes out and finds what real people are actually asking about each service. Reddit,
11:59 local forums, competitor sites, People Also Ask results. Second, it researches the local area
12:04 itself, neighborhoods, landmarks, local conditions, anything that makes the content sound like it was
12:11 written by someone who operates in that market. But research alone isn't going to fix the biggest
12:17 problem with AI content, and that's because of how AI actually writes. Let me show you what the AI
12:23 agent research looks like, and then I'm going to break down the writing system that makes this
12:27 content pass, get indexed, and rank while everything else fails. So here's the bulk queue,
12:32 and you can see it's in the process of running right now. It's generating some outlines for
12:36 different articles. And if I just go and open up this one, which has research done, it's going to
12:41 give me the brief, everything that it found, the PAA, the competitor analysis, featured snippets,
12:47 local context, all of this information it researched for this article before it ever
12:52 started even writing. Then it wrote an outline before it finally started producing content.
12:58 And even once all that research is done, writing is where a lot of AI content tools basically fall
13:04 apart. You have a prompt, you give it to AI, and it spits out an article in one shot. One prompt,
13:10 one output. And it reads like AI wrote it because it did. One model, one pass, one consistent tone
13:16 the whole way through. That is not how humans write. Humans write section by section. They
13:16 take breaks. They come back. They revise. The tone shifts slightly between sections. Sentence lengths
13:27 vary. That's what makes writing feel human. Boy, you look like you are really agreeing with me here,
13:33 huh? You just keep looking up at me. Let me show you how this tool writes. The agent uses an eight
13:38 pass pipeline for writing. And that engine is different for each of the four main types of
13:44 content. So we have service pages, category pages, location pages, and supporting content.
13:48 And the writing prompts the system uses are different for each one of those.
13:52 Let me just show you service pages as an example.
13:55 So I'll open it up.
13:56 So the first thing we have is overall content guardrails.
13:59 So this is a shorter prompt that is put on top of every other writing prompt
14:04 to make sure that the AI doesn't run off and do something completely different.
14:09 So this is just reminding it every step of the way what it's writing, why it's writing that.
14:14 So let's talk through them. Pass one is the research synthesis.
14:17 All of that topical and local research that we just ran, it's raw, Reddit threads, people also ask results, competitor angles, local landmarks.
14:25 This first pass takes all of that and compresses it into a structured content brief that the AI will then use in all of the future writing.
14:32 What questions need answering? What local details to weave in? What angles are the competitors covering that this page needs to cover, but better?
14:39 This is the pass that turns a pile of research into a writing plan.
14:43 Then, pass two: this is our strategic outline. The system takes that brief and builds
14:48 the full page architecture. Every H2 heading, the angle for each section, the flow from top
14:53 to bottom. This isn't just a list of headings, it's mapping out what each section needs to
14:57 accomplish and how they connect to each other. Most AI tools skip this part entirely and just
15:04 start writing. That's why their content feels like it wanders. Pass three is the section draft,
15:09 And this is where it gets really interesting. The agent is going to write each H2 section
15:13 with an independent call, a separate prompt for each section with a separate call to the API.
15:19 That results in a slightly different tone and angle. This is the foundation. It mimics how a
15:24 real writer would draft an article section by section over the course of a day. You come back
15:28 and each section feels fresh. Your energy is different. Your word choice shifts. The natural
15:34 variation is what makes content feel like a person wrote it instead of like a machine wrote it.
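A sketch of that pass-three idea: one independent call per H2 heading, with slightly varied settings per call. The `call_llm` client here is a stand-in so the example runs offline, not the tool's actual API:

```python
import random

def draft_article(h2_sections, call_llm):
    """Draft each H2 section with its own independent model call.

    `call_llm(prompt, temperature)` is a hypothetical stand-in for a real
    API client -- the point is one call per section, with slightly varied
    settings, rather than one prompt producing the whole article.
    """
    drafts = []
    for heading in h2_sections:
        temperature = random.uniform(0.6, 0.9)  # nudge tone section to section
        prompt = f"Write the section under the heading: {heading}"
        drafts.append((heading, call_llm(prompt, temperature)))
    return drafts

# Stubbed client so the sketch runs without any API key
fake_llm = lambda prompt, temperature: f"[draft at t={temperature:.2f}] {prompt}"
for heading, body in draft_article(["Signs You Need a Plumber", "What It Costs"], fake_llm):
    print(heading, "->", body[:40])
```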
15:40 Then we're going to go to pass four, burstiness. This is one that most people have never heard of.
15:45 AI typically writes in a very consistent rhythm. Sentences tend to be about the same length.
15:50 Paragraphs follow the same cadence. That's not how humans write. Humans write a long sentence,
15:55 then a short one, then two medium ones, then a fragment. Pass four goes through the entire draft
16:00 and breaks up that robotic rhythm.
16:02 It varies sentence lengths, mixes in short, punchy lines,
16:06 adds the kind of irregular pacing that makes content feel alive instead of generated.
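Burstiness can be quantified as the coefficient of variation of sentence length (words per sentence); a pass like this one would aim to raise it. A minimal sketch of the metric:

```python
import re
import statistics

def burstiness(text):
    """Coefficient of variation of sentence length (words per sentence).
    Uniform AI rhythm scores near zero; varied human prose scores higher."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

uniform = "We fix pipes quickly. We fix drains quickly. We fix leaks quickly."
varied = "We fix pipes. Drains, leaks, burst mains in the middle of a February cold snap, you name it. Fast."
print(burstiness(uniform), "<", burstiness(varied))
```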
16:11 Pass five: now we're going to inject perplexity.
16:14 The system finds predictable AI word patterns and replaces them.
16:18 If something says significant improvements,
16:20 it rewrites that to something a human would actually say.
16:24 Same meaning, but a completely different word choice.
16:27 AI has favorite words.
16:29 Robust, leverage, streamline. Humans don't talk like that. This is the pass that kills the AI sniff test. And yes, it does remove the em dashes.
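A toy version of that pass: swap out a few predictable AI words and strip the em dashes. The swap list is illustrative; the tool's actual replacement rules live in its prompts, not a fixed table:

```python
import re

# A few of AI's favorite words with plainer stand-ins (illustrative only)
SWAPS = {
    "leverage": "use",
    "robust": "solid",
    "streamline": "simplify",
    "significant improvements": "a real difference",
}

def humanize(text):
    """Replace predictable AI vocabulary and turn em dashes into comma pauses."""
    for pattern, plain in SWAPS.items():
        text = re.sub(pattern, plain, text, flags=re.IGNORECASE)
    return re.sub(r"\s*\u2014\s*", ", ", text)  # em dash -> comma pause

sample = "We leverage a robust process \u2014 expect significant improvements."
print(humanize(sample))
```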
16:40 Now, pass six is what we call human bookends. The system writes the first two and the last two sentences of the article with extremely conversational, opinionated language. Those sentences are the ones that Google's algorithm,
16:52 and all of the other AI systems, weigh the heaviest. They're the ones searchers, in fact,
16:58 read first. They'll typically read the first couple of sentences, then scroll to the bottom
17:01 and read the last. Getting those right matters more than the rest of the article combined,
17:06 so we have a pass focused specifically on that. Pass seven is focused on conversion,
17:11 because what's the point of ranking a page if it doesn't make the phone ring?
17:14 So this pass goes through the content and naturally injects calls to action,
17:18 phone numbers, and the conversational language.
17:20 Not in a spammy way, not as a banner ad dropped in the middle of a paragraph.
17:24 And each type of content, this conversion pass is going to look and feel very different
17:28 because it's customized for that type of content.
17:31 It's basically writing the kind of lines that make a homeowner stop reading and actually pick up the phone.
17:37 Most AI content tools can produce informational articles.
17:40 This pass turns them into pages that will generate leads. And pass eight is the final check.
17:45 We've had a lot of different calls writing this article, so pass 8 just re-evaluates the entire
17:51 article to make sure it's correct, it's cohesive, it follows the original guidelines, the original
17:56 content brief, and the original outline. Did it do everything that it was supposed to do? Is the
18:01 word count where it should be? Are there any leftover AI patterns that slip through the earlier
18:05 passes, anything that doesn't meet the standard gets flagged and rewritten. This is the quality
18:10 control layer that catches what the other seven passes may have missed. And while those eight
18:16 passes are running, the agent is simultaneously running these parallel steps. We're generating an
18:22 FAQ section. We're generating the meta title description and H1 tag. We're generating schema
18:27 markup. We're generating images with specific image generation prompts. And we're inserting
18:32 highly relevant external links to authority sources, all in parallel. So by the time the
18:37 content is finished, the page is fully built. Then the tool will go in and actually render a video
18:43 script, generate the video, generate a YouTube title, tags, and description, upload the video to
18:49 YouTube, and publish the whole thing to WordPress with the video embedded in the content. Fully
18:54 deployed, no human needed. That's the full pipeline, but you probably want to see what it actually
19:00 looks like. So let me show you. Here is what a completed run looks like. We have 29 articles
19:05 completed, zero failed, 53,000 words of content with an average AI detection score of 39%.
19:11 We can download all of them as a DocX. This also will publish to WordPress, but we did this run to
19:16 DocX. We can download each individual one. We can download the images. And if we come down,
19:22 we can actually see what happens. We had a couple of failed videos, but we had some videos that were
19:26 ready and fully generated, and we can see the AI scores. This means my editor can now come in here
19:32 and know that this 2,000-word article scored 10, so it's probably fine, but this article scored 65,
19:38 so it's probably going to need another pass to get that AI score lower. But overall, none of this
19:44 content has ever been touched by a human; this is straight out of the tool. Let me show you what one
19:48 of the WordPress pages actually looks like once it's finished. So this is a finished page. We have
19:53 the hero image and H1. We come through here, we have a table of contents. And remember, no human touched
20:00 this; this was fully generated and built by AI, including putting it on WordPress. That's kind of funny for
20:06 a septic service. And here's the YouTube video. I'll go ahead and mute it and bump up the playback
20:12 speed so you can see, but this is the video that the tool created, uploaded to YouTube, and embedded.
20:18 We know that these videos improve indexing on Google. They improve ranking. They improve authority.
20:24 So this is definitely something that we do regularly with a lot of our content for clients.
20:29 So remember that rank map I talked about? Here's the after: average position of 3.22, 74% green
20:35 coverage across the entire greater Boston area. Malden, Medford, Somerville, Cambridge, Revere, all
20:40 the way out to Lynn and Saugus. And the spots that aren't green are ranked fourth and fifth. One more
20:45 round of geographical relevance pages and those are going to be green too. This isn't six months
20:49 of blogging. It's not hundreds of backlinks. That's this system executed by this agent in a single
20:55 session. And this isn't a demo I put together for this video. This is a real tool. Let me show you
21:01 the actual usage dashboard. We have over 1100 pages generated by real users, 114 agency owners
21:09 or business owners running this tool.
21:12 So remember, 1162 pages.
21:14 And if I look at token usage,
21:16 over the lifetime of the tool, we've spent $700.
21:19 That's well under a dollar per page in API costs.
21:22 At my agency, my cost per article
21:24 is under half what it was before.
21:26 And the quality has actually improved.
21:29 And what we see happening,
21:30 most of our labor is now in double checking,
21:33 reviewing the content that the tool is producing
21:35 instead of actually copy paste prompting.
21:38 So if you want access to this tool,
21:39 every single prompt behind it, and a weekly live Zoom with me where I can help you implement all
21:44 of this for your clients or for your business, it's all inside my AI SEO Pro community. I'll put
21:49 a link in the description. There's no extra charge for access to this tool and it's bring your own
21:53 API key so I don't make anything on usage. But here's the thing. None of this matters if you
21:58 can't get clients in the door. I want to make sure that you understand the full A to Z system
22:03 for how to do local SEO.
22:06 So I'm going to link you to this video here,
22:08 which is over 40 minutes long,
22:10 and I break down every little thing
22:12 that my agency does to rank local businesses.
22:15 The video was produced before this tool,
22:17 but it helps you understand the foundation
22:19 of what this tool is doing
22:21 before you come in and start using it.

Caleb Ulku demonstrates an AI agent he built that automates the full local SEO workflow — from GBP category auditing and gap analysis to content generation, schema markup, video creation, YouTube upload, and WordPress deployment — for under $1 per page in API costs. The system is built on a 'Core 30' framework where every GBP service and category gets a dedicated website page, supplemented by topical relevance content (scraped from People Also Ask, Reddit, forums) and geo-targeted location pages using Google Places API data. The content pipeline uses an 8-pass writing system designed to mimic human writing patterns, including burstiness variation, perplexity injection to remove AI word patterns, and human-sounding bookend sentences. A real-world test case shows a Malden, MA plumber improving from average position 6.18 with 15% top-3 map coverage to position 3.22 with 74% green coverage across the greater Boston area after one session with the tool.

AI-Powered Local SEO Automation • Entity-Based SEO Strategy • Content Quality and Anti-AI-Detection Writing • Topical and Geographical Relevance Building • Agency Efficiency and Cost Reduction
  • Build a 'Core 30' site structure where every GBP service and category has its own dedicated page, mirroring the exact hierarchy of the Google Business Profile — this entity alignment is what Google (and AI search engines) actually evaluate.
  • Use an 8-pass AI writing pipeline to avoid detectable AI content: write sections independently (separate API calls), vary sentence length for burstiness, replace predictable AI vocabulary, and manually craft the first and last two sentences with conversational, opinionated language.
  • Target geo-relevance by pulling real landmarks from the Google Places API and writing hyper-local pages (e.g., 'plumber near Highland Falls Golf Club') to convert rank map positions from 4s to 3s rather than wasting effort on positions already in the double digits.
  • Skip weekly blog posts — instead, prioritize topical relevance pages built from real search data (PAA, Reddit threads, competitor headlines) and geographic relevance pages built from local landmarks near underperforming rank map positions.
  • The tool is available inside Caleb's 'AI SEO Pro' community with bring-your-own-API-key pricing, meaning no markup on usage costs — real-world total spend across 1,162 pages was under $700.
Concepts (15)

Entity-Based SEO (1 video, Core)
Google's approach since 2018 of matching entities (businesses, services, geographies) rather than keywords, where ranking depends on how well all related entities align and reinforce each other.

Core 30 (5 videos, Core)
A local SEO website architecture strategy consisting of approximately 30 pages built from 3-4 GBP categories and 20-25 services, structured so the website exactly mirrors the Google Business Profile to signal trust and relevance to Google's algorithm.

Eight-Pass Writing Pipeline (1 video, Core)
An AI content generation system that writes articles through eight sequential passes (research synthesis, strategic outline, section drafting, burstiness, perplexity injection, human bookends, conversion optimization, and final quality check) to produce human-sounding content.

AI SEO Agent (1 video, Core)
An automated AI-powered system that executes the full local SEO content workflow (entity research, GBP audit, gap analysis, content production, schema, images, video generation, YouTube upload, and WordPress deployment) in approximately 90 minutes at under $1 per page.

Topical Relevance (4 videos, Core)
The practice of building additional content that connects a business entity with its service entity by answering real questions people ask about that service, proving to Google the business genuinely performs those services.

Geographical Relevance (1 video, Core)
The practice of creating hyper-local content referencing specific landmarks and neighborhoods to prove to Google that a business operates in specific geographic areas, improving local rank map positions.

Local Rank Map (5 videos, Core)
A geographic visualization tool (also called a "local SEO heat map") that shows exactly where a business ranks for a given keyword across different locations, helping identify gaps and guide optimization efforts.

GBP Category Audit (1 video, Core)
An analysis of competitor Google Business Profile categories in a target market to identify categories a client is missing that competitors are using, which represent lost ranking opportunities.

Gap Analysis (1 video, Core)
A process of crawling a client's existing website and comparing its pages against all GBP categories and services to identify which service pages are missing and need to be created.

Burstiness (1 video, Supporting)
A writing technique that varies sentence length and paragraph cadence to mimic human writing patterns, breaking the uniform rhythm that characterizes AI-generated content.

Perplexity Injection (1 video, Supporting)
A content editing pass that identifies and replaces predictable AI vocabulary patterns and clichéd phrases with more natural, human-sounding language.

Section-by-Section Drafting (1 video, Supporting)
An AI writing approach where each H2 section of an article is generated with a separate, independent API call, producing natural tonal variation that mimics how human writers work across a session.

Caleb Ulku (34 videos, Supporting)
The primary guest and SEO expert featured in the video, founder of an AI SEO agency that developed the Core 30 local SEO methodology and scaled to 97 plumber clients using AI-driven content and local link-building strategies.

Human Bookends (1 video, Supporting)
A writing pass that crafts the first two and last two sentences of an article with highly conversational, opinionated language, targeting the portions Google and readers weight most heavily.

LeadSnap (1 video, Supporting)
A third-party GBP management and local rank tracking tool that provides the rank map interface used to visualize local search positions and identify geographic targeting opportunities.
Q&A 20
How long does it typically take agencies to rank a local business, and how much does it cost in labor?

Most agencies spend 60 hours or more and thousands of dollars in labor to rank a local business. Even with AI helping write content, you're looking at 30 to 45 minutes per page, times 30 pages or more. That's a significant amount of work per client before the full system is even in place.

What does the AI agent for local SEO do, and how fast does it run?

The AI agent runs the entire local SEO process including entity research, Google Business Profile (GBP) audit, gap analysis, content production, schema markup, image generation, video generation, YouTube upload, and WordPress deployment. The full process runs in about 90 minutes of runtime at under a dollar per page in API costs. It uses a bring-your-own-key model with no markup.

Why does Google rank businesses based on entities rather than keywords, and what does that mean for local SEO?

Since 2018, Google doesn't match keywords — it matches entities. Your Google Business Profile is an entity, your website is an entity, every service you list, every category on your GBP is an entity, and your geography is an entity. Google constantly checks whether all of these entities match each other. The closer they match, the better you rank. The more gaps between them, the less Google trusts you. This means local SEO is fundamentally about aligning all your entity signals rather than stuffing keywords.

What is a GBP category audit and why is it important for local SEO?

A GBP category audit involves researching what categories your Google Business Profile has (primary and secondary), and comparing them to what competitors are using that you aren't. Every missing category represents a search where Google will show your competitor instead of your business. For example, for a plumber in Malden, Massachusetts, the tool found 68 GBPs with 'plumber' as a category, and those businesses also used categories like electrician (17 times), heating contractor (14 times), and bathroom remodeler (8 times). The audit helps you identify which additional categories make sense to add to your GBP.
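The tallying step of a category audit can be sketched in a few lines. This is not the tool's actual implementation; the function name and data shapes are illustrative, and the competitor list below just reproduces the counts from the Malden example:

```python
from collections import Counter

def audit_competitor_categories(client_categories, competitor_profiles):
    """Tally categories across competitor GBPs and flag ones the client lacks.

    competitor_profiles: list of category lists, one per competitor GBP.
    """
    counts = Counter(cat for profile in competitor_profiles for cat in profile)
    have = set(client_categories)
    missing = {cat: n for cat, n in counts.items() if cat not in have}
    # Most-used missing categories first: each is a search the client can't win yet.
    return sorted(missing.items(), key=lambda kv: -kv[1])

# Illustrative data matching the Malden plumber numbers (68 profiles total).
competitors = (
    [["Plumber", "Electrician"]] * 17
    + [["Plumber", "Heating contractor"]] * 14
    + [["Plumber", "Bathroom remodeler"]] * 8
    + [["Plumber"]] * 29
)
print(audit_competitor_categories(["Plumber"], competitors))
```

The sort order matters in practice: the categories competitors use most often are usually the ones with the most search volume behind them.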

What is the 'Core 30' strategy in local SEO and how should a website be structured around it?

The Core 30 means creating one page for every category and service listed on your Google Business Profile — typically around 30 pages. The structure works as follows: the homepage targets the primary category plus city; each service gets its own dedicated page; internal linking follows the same hierarchy as the GBP (homepage links to secondary categories, and secondary category pages link down to the service pages under that category). The goal is to build a website that is an exact mirror image of your Google Business Profile, ensuring Google sees consistent entity signals across both platforms.

How do you perform a gap analysis to find missing pages on a local business website?

A gap analysis compares the services and categories on a Google Business Profile against the pages that actually exist on the website. The process involves: (1) pasting in the GBP categories and services and parsing them with the tool; (2) crawling the website to identify all existing pages and mapping each page to the correct service or category; (3) running a research comparison that identifies which services and categories have no corresponding page on the website. Those missing pages are then added to a bulk content generation queue. Most agencies skip this step entirely, which is a major reason their clients stay stuck for months.
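At its core, step (3) is a set comparison. A minimal sketch, assuming the crawl has already mapped each page URL to the service it covers (names and shapes below are hypothetical):

```python
def find_missing_pages(gbp_items, site_pages):
    """Compare GBP categories/services against pages found by a site crawl.

    gbp_items:  services and categories parsed from the Google Business Profile.
    site_pages: mapping of page URL -> the service/category it covers
                (None for pages that map to nothing, e.g. /about).
    """
    covered = {svc for svc in site_pages.values() if svc is not None}
    # Anything on the GBP with no matching page goes into the content queue.
    return [item for item in gbp_items if item not in covered]

gbp = ["Drain cleaning", "Water heater repair", "Sump pump installation"]
pages = {
    "/drain-cleaning": "Drain cleaning",
    "/about": None,
}
print(find_missing_pages(gbp, pages))
```

The hard part in the real tool is the mapping itself (deciding that `/drain-cleaning` covers "Drain cleaning"); once that exists, the gap list falls out directly.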

What are the two types of relevance needed beyond the Core 30 pages, and why are they important?

Beyond the Core 30 pages, you need to build topical relevance and geographical relevance. Topical relevance connects your business entity with your service entity — it proves you actually do what you say you do. It's built by creating content that answers real questions people are asking about your service (from sources like People Also Ask, Reddit, competitor headlines, and local forums). Geographical relevance connects your business and service entities with a specific location entity — it proves you operate where you say you operate. It's built by creating location-specific pages targeting landmarks and neighborhoods in your service area. Both are essential; most agencies build neither. Weekly blog posts are not an effective substitute.

How does geographical relevance content work for local SEO, and how do you identify what to write about?

Geographical relevance content convinces Google that your business actually operates in specific locations within your service area. The process involves looking at your local rank map (using a tool like LeadSnap) and identifying areas where you're ranking 4th, 5th, or lower. For those weak spots, you zoom into the map and find real landmarks that appear on Google Maps — things like golf clubs, parks, stadiums, or community centers. You then write articles like 'Window Cleaning near Highland Falls Golf Club in Las Vegas.' The landmarks are pulled directly from the Google Places API, so Google already recognizes them. The goal is to turn a rank-4 position into a rank-3 rather than trying to move a rank-10 to rank-9, since rank-9 still gets no traffic.
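The targeting logic described above can be sketched as a filter over the rank map. In the real tool the landmarks come from the Google Places API; here they are hardcoded, and the rank range 4-6 is my assumption about what "ranking 4th, 5th, or lower" but still worth chasing looks like:

```python
def geo_article_titles(service, city, rank_map, landmarks, lo=4, hi=6):
    """Propose hyper-local article titles for areas ranking just outside the top 3.

    rank_map:  {area_name: current_rank}
    landmarks: {area_name: [landmark, ...]}
    """
    titles = []
    for area, rank in rank_map.items():
        # Target rank 4-6: turning a 4 into a 3 beats nudging a 10 to a 9.
        if lo <= rank <= hi:
            for lm in landmarks.get(area, []):
                titles.append(f"{service} near {lm} in {city}")
    return titles

ranks = {"Highland Falls": 4, "Downtown": 2, "Eastside": 9}
spots = {"Highland Falls": ["Highland Falls Golf Club"], "Eastside": ["Eastside Park"]}
print(geo_article_titles("Window Cleaning", "Las Vegas", ranks, spots))
```

Note that Downtown (already rank 2) and Eastside (rank 9, too far back to matter) both get skipped; only the winnable rank-4 spot generates a title.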

What makes AI-generated local SEO content fail to rank, and how can you fix it?

Two main things kill local content: (1) It reads like every other page on the internet in that niche — when content is generic, Google has no reason to rank it and increasingly won't even index it. (2) It isn't genuinely local. Saying 'we serve the Malden area' isn't enough — content needs to reference specific local details like the triple-deckers in Medford, old pipe infrastructure in Somerville, or coastal weather coming off Revere Beach. The fix involves doing thorough research before writing (Reddit, local forums, People Also Ask, competitor sites, local landmarks), and using a multi-pass writing pipeline that mimics how humans actually write rather than generating everything in a single AI prompt.

What is the 8-pass writing pipeline used in the AI SEO agent, and what does each pass do?

The 8-pass writing pipeline works as follows: (1) Research Synthesis — compresses raw research (Reddit threads, People Also Ask, competitor angles, local landmarks) into a structured content brief; (2) Strategic Outline — builds the full page architecture with every H2 heading, section angles, and how sections connect; (3) Section Draft — writes each H2 section with a separate independent API call, creating natural variation in tone; (4) Burstiness — breaks up robotic rhythm by varying sentence lengths and mixing in short punchy lines and fragments; (5) Perplexity Injection — finds and replaces predictable AI word patterns (like 'robust,' 'leverage,' 'streamline') with language humans actually use; (6) Human Bookends — rewrites the first two and last two sentences with extremely conversational, opinionated language since those are weighted most heavily by Google; (7) Conversion Pass — naturally injects calls to action, phone numbers, and conversational language tailored to the content type; (8) Final Check — re-evaluates the entire article for correctness, cohesion, adherence to the original brief, word count, and any remaining AI patterns.
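Structurally, the pipeline is a chain where each pass consumes the previous pass's output. A hypothetical orchestration sketch (the `call_llm` stub just labels each pass; a real implementation would hit a model API with a pass-specific prompt):

```python
def call_llm(pass_name, text):
    # Stub: a real call would send a pass-specific prompt to an LLM.
    return f"{text}|{pass_name}"

PASSES = [
    "research_synthesis",    # raw research -> structured brief
    "strategic_outline",     # brief -> full H2 architecture
    "section_draft",         # one independent call per H2 (collapsed here)
    "burstiness",            # vary sentence length and cadence
    "perplexity_injection",  # swap predictable AI vocabulary
    "human_bookends",        # rewrite first/last two sentences
    "conversion_pass",       # inject CTAs and phone numbers
    "final_check",           # verify brief adherence, word count, cohesion
]

def run_pipeline(raw_research):
    draft = raw_research
    for p in PASSES:
        draft = call_llm(p, draft)  # each pass sees the previous pass's output
    return draft
```

The ordering is deliberate: style passes (4-6) run after the full draft exists, and the final check runs last so it can catch anything an earlier pass broke.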

Why does writing each content section with a separate API call produce better results than a single-prompt approach?

Writing each H2 section with a separate independent API call results in slightly different tone and angle for each section, which mimics how a real human writer drafts an article section by section over the course of a day. When a human writes, they take breaks, come back, and their energy and word choice shifts naturally between sections. This natural variation is what makes content feel like a person wrote it instead of a machine. Most AI content tools use a single prompt for one output, which produces a consistent robotic tone throughout — a telltale sign of AI-generated content.
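The key property is that each call carries no shared conversation context. A minimal sketch of that idea, with a stub standing in for the model call:

```python
def draft_sections(headings, call_model):
    """One independent model call per H2: no shared context between calls,
    so tone drifts naturally the way a human's does across a writing session."""
    return {h2: call_model(f"Write the section under the heading: {h2}") for h2 in headings}

# Stub model for illustration; a real call would hit an LLM API.
drafts = draft_sections(
    ["Signs You Need Drain Cleaning", "What It Costs in Malden"],
    call_model=lambda prompt: prompt.upper(),
)
print(list(drafts))
```

The single-prompt alternative would be one call that returns all sections at once, which is exactly what produces the uniform robotic tone the answer describes.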

What is 'burstiness' in AI content writing and why does it matter?

Burstiness refers to the natural variation in sentence length and paragraph cadence that characterizes human writing. AI typically writes in a very consistent rhythm where sentences tend to be about the same length and paragraphs follow the same cadence. Humans, by contrast, write a long sentence, then a short one, then two medium ones, then a fragment. The burstiness pass in the writing pipeline goes through the entire draft and breaks up the robotic rhythm by varying sentence lengths, mixing in short punchy lines, and adding irregular pacing that makes content feel alive instead of generated.
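Burstiness is measurable: one common proxy is the spread of sentence lengths. A rough sketch (the metric choice is mine, not necessarily what the tool uses):

```python
import re
import statistics

def burstiness_score(text):
    """Std dev of sentence lengths in words: higher = more human-like variation."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

robotic = "The pipe was fixed quickly. The drain was cleared fully. The sink was sealed well."
human = ("We fixed it. The old cast-iron drain under the kitchen had corroded straight "
         "through, so we cleared it, sealed the sink, and double-checked the trap. Done.")
assert burstiness_score(human) > burstiness_score(robotic)
```

The robotic sample scores zero (every sentence is five words); the human sample scores high because a three-word opener sits next to a twenty-plus-word sentence and a one-word fragment.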

What happens in parallel while the AI agent is writing content for a local SEO page?

While the 8 writing passes are running, the agent simultaneously executes several parallel steps: generating an FAQ section, generating the meta title, meta description, and H1 tag, generating schema markup, generating images with specific image generation prompts, and inserting highly relevant external links to authority sources. After the content is finished, the tool also renders a video script, generates the video, creates a YouTube title tag and description, uploads the video to YouTube, and publishes the entire page to WordPress with the video embedded. The result is a fully deployed page with no human intervention needed.
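Running independent side tasks concurrently is straightforward with a thread pool. A sketch with stub task functions (the real agent's tasks call external APIs, which is exactly where concurrency pays off):

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for the steps the agent runs alongside the writing passes.
def generate_faq(page):    return f"faq:{page}"
def generate_meta(page):   return f"meta:{page}"
def generate_schema(page): return f"schema:{page}"
def generate_images(page): return f"images:{page}"

def run_parallel_steps(page):
    """Run the side tasks concurrently while the writing pipeline is busy."""
    tasks = [generate_faq, generate_meta, generate_schema, generate_images]
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = [pool.submit(task, page) for task in tasks]
        return [f.result() for f in futures]

print(run_parallel_steps("drain-cleaning"))
```

The video-render, YouTube upload, and WordPress publish steps would run after this, since they depend on the finished content.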

What results did the AI agent achieve for the plumber test site in Malden, Massachusetts?

Before running the system, the plumber's GBP had an average rank map position of 6.18 with only 15% of the map in the top three, showing mostly orange (positions 7-8) across the greater Boston area. After running the system, the average position improved to 3.22 with 74% green coverage across the entire greater Boston area, including Malden, Medford, Somerville, Cambridge, Revere, Lynn, and Saugus. The remaining non-green spots ranked 4th or 5th, and one more round of geographical relevance pages would likely turn those green too. This was achieved in a single session, not through months of blogging or hundreds of backlinks.

How cost-effective is the AI SEO agent compared to traditional agency methods?

The AI agent runs at under a dollar per page in API costs, using a bring-your-own-key model with no markup. Over the lifetime of the tool with 1,162 pages generated by real users, total token spend was approximately $700 — well under a dollar per page. The agency using this tool has seen their cost per article drop to under half of what it was before, while quality has actually improved. Most labor time has shifted from copy-paste prompting to double-checking and reviewing the content the tool produces.
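The per-page figure follows directly from the lifetime numbers quoted above:

```python
total_spend = 700   # approximate lifetime token spend, USD
pages = 1162        # pages generated by real users
cost_per_page = total_spend / pages
print(f"${cost_per_page:.2f} per page")  # roughly $0.60, well under a dollar
```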

Why are the first and last sentences of a web page the most important for SEO and conversion?

The first two and last two sentences of an article are the ones that Google's algorithm and other AI systems weigh the most heavily. They're also the sentences that searchers actually read — people typically read the first couple of sentences, then scroll to the bottom and read the last ones. Because of this disproportionate weight, the writing pipeline includes a dedicated pass (Pass 6 - Human Bookends) that writes these specific sentences with extremely conversational and opinionated language. Getting those right matters more than the rest of the article combined.

How does fixing entity alignment for Google also improve visibility in AI-powered search tools like ChatGPT and Perplexity?

When you fix entity alignment for Google — ensuring your GBP categories, website pages, and content all match and reinforce each other — you simultaneously become more visible to every AI model that is starting to replace traditional search. Google's own AI, ChatGPT, Perplexity, and Claude are all evaluating the same entity signals. So the work done to fix entity gaps for Google's ranking algorithm also improves your visibility in AI-powered search tools. This makes the investment in proper entity-based SEO doubly valuable as search behavior shifts.

What AI detection scores did the bulk content generation produce, and how should agencies use those scores?

In a completed run of 29 articles (53,000 words), the content achieved an average AI detection score of 39%. Individual articles varied — some scored as low as 10 (likely fine to publish as-is) while others scored as high as 65 (likely needing another editing pass to lower the AI score). The AI detection scores allow editors to prioritize their review time: low-scoring articles can go straight to publication while higher-scoring ones get a human editing pass. None of the content in the demo run had been touched by a human before scoring.
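The triage described above amounts to bucketing articles by score. A sketch with hypothetical thresholds (the 20/50 cutoffs are mine; the video only gives the endpoints 10 and 65 as examples):

```python
def triage_articles(scores, publish_below=20, edit_above=50):
    """Bucket articles by AI-detection score so editors review the worst first."""
    buckets = {"publish": [], "review": [], "edit": []}
    for title, score in scores.items():
        if score < publish_below:
            buckets["publish"].append(title)   # e.g. a 10: likely fine as-is
        elif score > edit_above:
            buckets["edit"].append(title)      # e.g. a 65: needs another pass
        else:
            buckets["review"].append(title)
    return buckets

run = {"drain-cleaning": 10, "water-heaters": 39, "sump-pumps": 65}
print(triage_articles(run))
```

This is how the scores convert into saved editor time: only the "edit" bucket needs real human work before publication.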

What is the recommended internal linking structure for a local business website to maximize entity alignment?

The internal linking structure should mirror the hierarchy of your Google Business Profile. The homepage should link to secondary category pages. Each secondary category page should then link down to the individual service pages that fall under that category. This creates a clear entity hierarchy that Google can follow, reinforcing the relationship between your primary category, secondary categories, and individual services — exactly matching how they're organized on your GBP. This structured approach is far more effective than random internal linking.
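The hierarchy can be generated mechanically from the GBP's category-to-service mapping. A sketch with hypothetical slugs and data (the homepage itself targets the primary category plus city):

```python
def slug(name):
    return name.lower().replace(" ", "-")

def internal_link_plan(secondary_to_services):
    """Mirror the GBP hierarchy: homepage -> secondary categories -> service pages.

    Returns (source_path, target_path) pairs for every internal link to build.
    """
    links = [("/", "/" + slug(cat)) for cat in secondary_to_services]
    for cat, services in secondary_to_services.items():
        cat_path = "/" + slug(cat)
        links += [(cat_path, f"{cat_path}/{slug(svc)}") for svc in services]
    return links

plan = internal_link_plan(
    {"Heating contractor": ["Boiler repair", "Furnace installation"]},
)
print(plan)
```

Every link in the output follows the GBP hierarchy exactly: the homepage points down to the secondary category, and the category page points down to its services, never sideways or at random.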

Why are weekly blog posts ineffective for local SEO, and what should agencies do instead?

Weekly blog posts are generally a waste of time for local SEO because they don't address the core problem — entity alignment gaps. Instead of random blog content, agencies should focus on two specific types of relevance-building content: topical relevance pages (answering real questions people ask about your services, sourced from People Also Ask, Reddit, competitor content, and local forums) and geographical relevance pages (hyper-local content targeting specific landmarks and neighborhoods in your service area to prove you operate there). These targeted content types directly address Google's entity matching requirements, while generic blog posts do not.