<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Thomas Unise</title>
	<atom:link href="https://thomasunise.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://thomasunise.com/</link>
	<description>Fractional CMO For Hire</description>
	<lastBuildDate>Tue, 07 Apr 2026 17:36:01 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>GLM-5.1 Scores 94.6% of Claude Opus on Coding at a Fraction the Cost</title>
		<link>https://thomasunise.com/glm-5-1-scores-94-6-of-claude-opus-on-coding-at-a-fraction-the-cost/</link>
					<comments>https://thomasunise.com/glm-5-1-scores-94-6-of-claude-opus-on-coding-at-a-fraction-the-cost/#respond</comments>
		
		<dc:creator><![CDATA[Thomas Unise]]></dc:creator>
		<pubDate>Tue, 07 Apr 2026 17:17:18 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[claude alternative]]></category>
		<category><![CDATA[glm]]></category>
		<category><![CDATA[glm 5.1]]></category>
		<category><![CDATA[glm vs opus]]></category>
		<guid isPermaLink="false">https://thomasunise.com/?p=1165</guid>

					<description><![CDATA[<p>Click Here to Try GLM for Yourself. Z.ai&#8217;s latest model is closing the gap with Western frontier AI faster than the industry expected. The hardware story is worse. Z.ai released GLM-5.1...</p>
<p>The post <a href="https://thomasunise.com/glm-5-1-scores-94-6-of-claude-opus-on-coding-at-a-fraction-the-cost/">GLM-5.1 Scores 94.6% of Claude Opus on Coding at a Fraction the Cost</a> appeared first on <a href="https://thomasunise.com">Thomas Unise</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><img fetchpriority="high" decoding="async" class="alignnone size-full wp-image-1166" src="https://thomasunise.com/wp-content/uploads/2026/04/GLM-5-1-Towards-Long-Horizon-Tasks-04-07-2026_12_12_PM.png" alt="" width="967" height="494" srcset="https://thomasunise.com/wp-content/uploads/2026/04/GLM-5-1-Towards-Long-Horizon-Tasks-04-07-2026_12_12_PM.png 967w, https://thomasunise.com/wp-content/uploads/2026/04/GLM-5-1-Towards-Long-Horizon-Tasks-04-07-2026_12_12_PM-300x153.png 300w, https://thomasunise.com/wp-content/uploads/2026/04/GLM-5-1-Towards-Long-Horizon-Tasks-04-07-2026_12_12_PM-768x392.png 768w, https://thomasunise.com/wp-content/uploads/2026/04/GLM-5-1-Towards-Long-Horizon-Tasks-04-07-2026_12_12_PM-200x102.png 200w" sizes="(max-width: 967px) 100vw, 967px" /></p>
<h3><a href="https://z.ai/subscribe?ic=DINOYCG2IA">Click Here to Try GLM for Yourself</a></h3>
<p><strong>Z.ai&#8217;s latest model is closing the gap with Western frontier AI faster than the industry expected. The hardware story is worse.</strong></p>
<p>Z.ai released GLM-5.1 on March 27, 2026, making it available to all GLM Coding Plan subscribers. The announcement landed on X with 1.2 million views and was met, in most Western AI circles, with the particular silence reserved for news people aren&#8217;t sure how to contextualize. That silence is worth examining, because the model deserves a cleaner read than it&#8217;s getting.</p>
<p>Using Claude Code as the evaluation framework, GLM-5.1 scored 45.3 on Z.ai&#8217;s internal coding benchmark. Claude Opus 4.6 scored 47.9. The gap is 2.6 points. In the world of LLM benchmarks, where labs regularly argue about fractions of a percentage point, a 2.6 point gap between an open-weights model from a Chinese lab and the best coding model Anthropic has ever shipped is not a gap. It&#8217;s a statement.</p>
<p>The instinct from the incumbent side will be to poke at the methodology. Z.ai chose Claude Code as the testing environment, which does give Claude a structural familiarity advantage because the tooling was built for it. That&#8217;s a noteworthy detail that Z.ai chose to lean into rather than obscure, suggesting a level of confidence in the result that isn&#8217;t common in benchmark marketing. They ran their model through the competitor&#8217;s house and still nearly won. That&#8217;s the point they&#8217;re making, and it lands.</p>
<h2>The Five-Week Problem</h2>
<p>What the benchmark score doesn&#8217;t capture is the velocity behind it, which is the part that should actually be keeping people up at night.</p>
<p>GLM-5 shipped in February 2026 with a coding evaluation score of 35.4. Five weeks later, GLM-5.1 scored 45.3. That is a 28% improvement with no architectural changes: same base model, better post-training.</p>
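<p>Both headline figures fall out of simple ratios of the published scores, easy to check:</p>

```python
# Sanity-check the two headline numbers quoted above
# (Z.ai internal coding benchmark scores).
glm_51 = 45.3   # GLM-5.1
opus_46 = 47.9  # Claude Opus 4.6
glm_5 = 35.4    # GLM-5, five weeks earlier

print(f"{glm_51 / opus_46:.1%}")    # 94.6% of Opus
print(f"{glm_51 / glm_5 - 1:.0%}")  # 28% improvement over GLM-5
```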
<p>Think about what that means operationally. Z.ai didn&#8217;t go back and redesign the model. They didn&#8217;t train from scratch on new data. They refined the alignment pipeline, the supervised fine-tuning, the reinforcement learning stages, the distillation process, and extracted 28% more coding performance out of the same weights. That is a pure execution problem, and they solved it in a month.</p>
<p>Western labs are not moving at that speed on post-training iteration. They&#8217;re moving fast, but the public release cadence doesn&#8217;t reflect the kind of rapid-fire incremental improvement Z.ai just demonstrated. Whether that reflects a structural difference in how they operate, a difference in organizational pressure, or simply more recent institutional memory of what it&#8217;s like to be behind is hard to say. What is certain: GLM-5.1 exists, it&#8217;s nearly as good as Opus at coding, and it will be open source.</p>
<h2>The Hardware Story Nobody Has a Good Response To</h2>
<p>GLM-5 and GLM-5.1 were trained entirely on 100,000 Huawei Ascend 910B chips. No Nvidia hardware was used at any stage of training.</p>
<p>This should be treated as a significant geopolitical development. It is mostly being treated as a technical footnote.</p>
<p>The architecture of U.S. AI export policy rests on a theory: restrict access to advanced compute, restrict access to frontier model training, maintain a lead by controlling the inputs. It is a defensible theory. It has also now produced a situation in which a Chinese AI lab that IPO&#8217;d on the Hong Kong stock exchange in January 2026 at a $31.3 billion valuation has trained a frontier-adjacent model entirely on domestic hardware, released it as open weights under the MIT license, and is offering API access for ten dollars a month.</p>
<p>The export controls may have slowed this by a year. Maybe two. But the idea that they would prevent it was always optimistic, and the evidence is now public and benchmarked. There is no clean response to this from a policy standpoint. You can&#8217;t un-release a model. You can&#8217;t revoke weights that are already on HuggingFace. The Huawei Ascend ecosystem exists, it runs frontier training workloads, and the next version of every major Chinese model will also be trained on it.</p>
<p>What makes this particularly uncomfortable is that the Ascend 910B is not supposed to be competitive with H100s for training at this scale. The received wisdom was that Chinese labs could perhaps train smaller, more efficient models on domestic hardware but would hit a wall when trying to scale. GLM-5 runs 744 billion parameters with 40 billion active, trained on 28.5 trillion tokens. That is not a small model. That is not hitting a wall. That is operating at a scale that was supposed to be off-limits without Nvidia silicon, and it produced a result that is within three points of GPT-5.2 on SWE-bench.</p>
<p>The policy community needs to update its model. The technical community already has.</p>
<h2>What the Model Actually Does Well</h2>
<p>On SWE-bench Verified, GLM-5.1 scores 77.8%. Claude Opus 4.6 scores 80.8%. GPT-5.2 scores 80.0%. Three points separate GLM-5.1 from the two best closed-source coding models in the world. For individual developers, that gap is invisible in day-to-day work. For enterprises with specific, high-stakes technical requirements, it might matter. For the median software team shipping product, it is noise.</p>
<p>On Vending Bench 2, which measures sustained long-horizon planning by requiring a model to operate a simulated vending machine business over a full year, GLM-5 ranks first among open-source models and approaches Claude Opus 4.5. That benchmark doesn&#8217;t get as much press as SWE-bench, but it&#8217;s arguably more relevant to real agentic deployments. Running an agent loop for thirty seconds is easy. Running one for an hour without drifting, losing state, or making cascading bad decisions is the actual problem. GLM-5 is apparently good at this.</p>
<p>GLM-5 also integrates DeepSeek Sparse Attention, which substantially cuts deployment cost while preserving long-context capacity. Z.ai is not building in a vacuum. They are pulling the best ideas from across the open-source ecosystem, including from DeepSeek, another Chinese lab that rattled the industry earlier this year, and assembling them into a coherent product. That&#8217;s not plagiarism. That&#8217;s how engineering works. The Western labs do it too. The difference is that Z.ai is doing it faster right now.</p>
<h2>Where It Still Falls Short</h2>
<p>GLM-5.1 is a coding and agentic model. It is not pretending to be anything else, which is actually a point in its favor.</p>
<p>Claude Opus 4.6 supports a one million token context window. GLM-5.1 caps at 200K. For any use case that requires sustained context across very long sessions, such as deep legal document review, large codebase analysis, or long multi-session agentic work, that gap is real and not easily dismissed. 200K is generous for most tasks. It is not enough for all of them.</p>
<p>On complex reasoning and multimodal tasks, Opus retains a meaningful edge. GLM-5.1 was not built to be the best at everything. It was built to be nearly as good as the best at coding for a tenth of the price, and that is what it is.</p>
<h2>The Market Question</h2>
<p>The GLM Coding Plan runs $10 per month standard pricing. Claude Max runs $100. For a developer choosing between 94% of Opus performance at $10 versus 100% at $100, the math is not complicated. Most developers do not need the last 6%. They need reliable, fast, accurate code generation that fits inside a budget. GLM-5.1 answers that question directly.</p>
<p>The integration story is also deliberately frictionless. GLM-5.1 is fully compatible with Claude Code. Switching requires changing an API endpoint. Nothing else. Z.ai built an on-ramp specifically calibrated for Claude&#8217;s user base. They are not asking developers to change their workflow. They are asking developers to change one line of configuration. That is a well-designed competitive move.</p>
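<p>In practice, the switch is a couple of environment variables. The variable names and the Anthropic-compatible endpoint below are assumptions for illustration; check Z.ai&#8217;s documentation for the current values:</p>

```shell
# Hypothetical one-line switch: point Claude Code at Z.ai's
# Anthropic-compatible endpoint (names and URL assumed, not confirmed here).
export ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic"
export ANTHROPIC_AUTH_TOKEN="YOUR_ZAI_API_KEY"
```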
<p>The weights are coming. Li Zixuan confirmed the open-source release with the kind of brevity that suggests it&#8217;s not a question: &#8220;Don&#8217;t panic. GLM-5.1 will be open source.&#8221; Once those weights land, the conversation shifts again. A 94%-of-Opus coding model running locally on enterprise hardware, under the MIT license, with no data leaving the building, is a value proposition no cloud API can match. For law firms, medical practices, defense contractors, and anyone for whom data sovereignty is not optional, this becomes a serious evaluation.</p>
<h2>Broader Points</h2>
<p>The Western AI narrative has been running on a particular story for two years. Frontier AI is expensive, closed, and American. Open-source models are impressive but not competitive. Chinese labs are catching up but not there yet.</p>
<p>GLM-5.1, alongside Qwen 3.5 and DeepSeek, represents a class of open-weights models that are now operating at or near frontier-level performance on specific benchmarks. The gap between the best closed-source American models and the best open-weights Chinese models is now measured in single digits on the benchmarks that matter to real engineers. That gap will be zero within the year on at least some dimensions. Probably sooner.</p>
<p>None of this means Anthropic or OpenAI are in trouble next quarter. They have ecosystems, trust, enterprise contracts, and capability advantages that don&#8217;t evaporate because one benchmark moved. But the story that Western labs can maintain a comfortable, durable lead based on compute access and organizational advantage has a shorter shelf life than it did in January.</p>
<p>GLM-5.1 is one data point. The problem is that the data points keep pointing the same direction, arriving faster than expected, and being trained on hardware that wasn&#8217;t supposed to work this well.</p>
<p>At some point, a trend is just the situation.</p>
<h3><a href="https://z.ai/subscribe?ic=DINOYCG2IA">Click Here to Try GLM for Yourself</a></h3>
<p>The post <a href="https://thomasunise.com/glm-5-1-scores-94-6-of-claude-opus-on-coding-at-a-fraction-the-cost/">GLM-5.1 Scores 94.6% of Claude Opus on Coding at a Fraction the Cost</a> appeared first on <a href="https://thomasunise.com">Thomas Unise</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://thomasunise.com/glm-5-1-scores-94-6-of-claude-opus-on-coding-at-a-fraction-the-cost/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>GLM-5V-Turbo: Z.AI&#8217;s Multimodal Coding Model Is Worth Your Attention</title>
		<link>https://thomasunise.com/glm-5v-turbo-z-ais-multimodal-coding-model-is-worth-your-attention/</link>
					<comments>https://thomasunise.com/glm-5v-turbo-z-ais-multimodal-coding-model-is-worth-your-attention/#respond</comments>
		
		<dc:creator><![CDATA[Thomas Unise]]></dc:creator>
		<pubDate>Thu, 02 Apr 2026 01:24:11 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[glm]]></category>
		<category><![CDATA[glm 5v turbo]]></category>
		<category><![CDATA[z ai]]></category>
		<guid isPermaLink="false">https://thomasunise.com/?p=1153</guid>

					<description><![CDATA[<p>Click Here to Try GLM for Yourself. Most &#8220;multimodal&#8221; models can look at an image and tell you what&#8217;s in it. Congratulations. So can a golden retriever. GLM-5V-Turbo does something more...</p>
<p>The post <a href="https://thomasunise.com/glm-5v-turbo-z-ais-multimodal-coding-model-is-worth-your-attention/">GLM-5V-Turbo: Z.AI&#8217;s Multimodal Coding Model Is Worth Your Attention</a> appeared first on <a href="https://thomasunise.com">Thomas Unise</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h1><img decoding="async" class="alignnone wp-image-1154 size-full" style="font-size: 16px;" src="https://thomasunise.com/wp-content/uploads/2026/04/glm-5v-turbo.png" alt="" width="2326" height="1608" srcset="https://thomasunise.com/wp-content/uploads/2026/04/glm-5v-turbo.png 2326w, https://thomasunise.com/wp-content/uploads/2026/04/glm-5v-turbo-300x207.png 300w, https://thomasunise.com/wp-content/uploads/2026/04/glm-5v-turbo-1024x708.png 1024w, https://thomasunise.com/wp-content/uploads/2026/04/glm-5v-turbo-768x531.png 768w, https://thomasunise.com/wp-content/uploads/2026/04/glm-5v-turbo-1536x1062.png 1536w, https://thomasunise.com/wp-content/uploads/2026/04/glm-5v-turbo-2048x1416.png 2048w, https://thomasunise.com/wp-content/uploads/2026/04/glm-5v-turbo-200x138.png 200w" sizes="(max-width: 2326px) 100vw, 2326px" /></h1>
<h3><a href="https://z.ai/subscribe?ic=DINOYCG2IA">Click Here to Try GLM for Yourself</a></h3>
<p>Most &#8220;multimodal&#8221; models can look at an image and tell you what&#8217;s in it. Congratulations. So can a golden retriever.</p>
<p>GLM-5V-Turbo does something more interesting. It&#8217;s Z.AI&#8217;s first model built specifically for <strong>vision-based coding tasks</strong>: not just vision, not just coding, but the intersection where most models fall apart.</p>
<h2>What It Is</h2>
<p>GLM-5V-Turbo is a vision-language model with a 200K context window and up to 128K output tokens. It takes images, video, text, and files as input. It outputs text: specifically, code, plans, and structured responses grounded in what it can see.</p>
<p>The positioning is &#8220;multimodal coding foundation model.&#8221; That&#8217;s not marketing fluff. The design choices back it up.</p>
<p>It uses a custom vision encoder called CogViT paired with an MTP (multi-token prediction) architecture optimized for inference speed. The goal was to keep a smaller parameter count without gutting performance. Based on benchmark results, they largely pulled it off.</p>
<h2>The Four Engineering Bets</h2>
<p>Z.AI made specific technical decisions that differentiate this model. They&#8217;re worth understanding.</p>
<p><strong>Native Multimodal Fusion.</strong> Visual and text training weren&#8217;t stitched together post-hoc. The fusion happens from pretraining through post-training. This matters because most VLMs treat vision as a bolt-on. When vision is native, the model reasons across modalities instead of translating between them.</p>
<p><strong>30+ Task Joint Reinforcement Learning.</strong> During RL training, the model was optimized across more than 30 task types simultaneously: STEM, GUI agents, video, grounding, coding, all at once. Joint RL tends to produce more robust generalization than task-specific fine-tuning. The tradeoff is complexity. Z.AI appears to have eaten that complexity so you don&#8217;t have to.</p>
<p><strong>Agentic Data Construction.</strong> Agent data is scarce and hard to verify. Z.AI built a multi-level, verifiable data pipeline and injected what they call &#8220;agentic meta-capabilities&#8221; during pretraining. Translation: the model learned to predict and execute actions early, not as an afterthought.</p>
<p><strong>Expanded Multimodal Toolchain.</strong> The model can use tools that involve visual input: bounding-box drawing, screenshots, webpage reading with image understanding. This extends agent loops beyond pure text into actual visual interaction. Relevant if your workflows involve real interfaces rather than imaginary ones.</p>
<h2>What It&#8217;s Built to Do</h2>
<p><strong>Frontend Recreation from Screenshots.</strong> Send it a design mockup. It reads the layout, color palette, component hierarchy, and interaction logic and then generates a runnable frontend project. For wireframes it reconstructs structure. For high-fidelity designs it aims for pixel-level accuracy.</p>
<p>The demo output in the docs shows it taking a two-screen mobile mockup and generating all four pages, including ones not explicitly shown. That&#8217;s inference, not just imitation.</p>
<p><strong>Autonomous Website Exploration.</strong> Paired with Claude Code or OpenClaw, it can browse a live website, map page transitions, collect visual assets and interaction details, then generate code from exploration, not just from a static screenshot. The docs describe this as upgrading from &#8220;recreating from a screenshot&#8221; to &#8220;recreating through autonomous exploration.&#8221; That&#8217;s a meaningful capability jump.</p>
<p><strong>Code Debugging from Screenshots.</strong> Drop in a screenshot of a broken UI. It identifies rendering issues, layout misalignment, component overlap, color mismatches and generates fix code. Useful for anyone who has spent 45 minutes staring at a CSS bug that a fresh pair of eyes would spot in seconds. The model is the fresh eyes.</p>
<p><strong>GUI Agent Tasks via OpenClaw.</strong> When integrated with OpenClaw, it handles complex real-world tasks combining visual perception, planning, and execution. Webpage layouts, GUI elements, chart data — it processes all of it as part of an action loop.</p>
<div style="width: 1080px;" class="wp-video"><video class="wp-video-shortcode" id="video-1153-1" width="1080" height="608" preload="metadata" controls="controls"><source type="video/mp4" src="https://thomasunise.com/wp-content/uploads/2026/04/xiBNKWlIAS-kEhCO.mp4?_=1" /><a href="https://thomasunise.com/wp-content/uploads/2026/04/xiBNKWlIAS-kEhCO.mp4">https://thomasunise.com/wp-content/uploads/2026/04/xiBNKWlIAS-kEhCO.mp4</a></video></div>
<h3><a href="https://z.ai/subscribe?ic=DINOYCG2IA">Click Here to Try GLM for Yourself</a></h3>
<h2>The Benchmarks</h2>
<p>On multimodal coding benchmarks, GLM-5V-Turbo posted leading results in design-to-code, visual code generation, and multimodal retrieval. It also performed well on AndroidWorld and WebVoyager, which test agents operating in real GUI environments. Those are harder tests than academic benchmarks. Real GUIs are messy.</p>
<p>On pure-text coding benchmarks (Backend, Frontend, and Repo Exploration in CC-Bench-V2), the performance held up. Adding visual capability didn&#8217;t cannibalize text-only coding ability. That&#8217;s not guaranteed. Plenty of multimodal models sacrifice text performance for vision gains. This one didn&#8217;t.</p>
<p>The model also scored well on PinchBench, ClawEval, and ZClawBench, which evaluate agent task execution quality specifically.</p>
<h2>Official Skills</h2>
<p>Z.AI ships a set of pre-built Skills available on ClawHub:</p>
<p><strong>Image Captioning</strong> — identifies objects, relationships between objects, scene context, and actions. Goes beyond label detection into actual scene understanding.</p>
<p><strong>Visual Grounding</strong> — given a natural-language description, locates the corresponding region in an image with bounding box precision. Useful for fine-grained analysis and interactive workflows.</p>
<p><strong>Document-Grounded Writing</strong> — extracts key information from PDFs and Word files, then generates structured text grounded in the document. Report generation, news writing, proposal drafting.</p>
<p><strong>Resume Screening</strong> — reads resumes, compares against job requirements, extracts education, experience, and skills, then ranks candidates. Recruiting automation that actually understands the document rather than keyword matching.</p>
<p><strong>Prompt Generation</strong> — analyzes reference images or videos and generates structured prompts for image and video generation models. Meta-capability: using vision to improve generation inputs.</p>
<h2>Integration</h2>
<p>The API endpoint is standard: <code>POST https://api.z.ai/api/paas/v4/chat/completions</code> with model set to <code>glm-5v-turbo</code>. Supports streaming. Thinking mode is available and toggleable.</p>
<p>SDK support covers Python (<code>zai-sdk</code>), Java (Maven/Gradle), and cURL. The Python SDK install is a single pip command. The API structure mirrors OpenAI-compatible patterns if you&#8217;ve worked with those before.</p>
<p>Thinking mode is enabled by passing <code>"thinking": {"type": "enabled"}</code> in the request body. Worth enabling for complex visual reasoning tasks. Skip it for latency-sensitive pipelines.</p>
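<p>As a minimal sketch, a request can be assembled like this. The endpoint, model name, and <code>thinking</code> flag come from the details above; the message shape follows the OpenAI-compatible pattern the API mirrors, so the exact field names for image input are assumptions:</p>

```python
import json

# Endpoint and model name as given above; everything else is a sketch.
API_URL = "https://api.z.ai/api/paas/v4/chat/completions"

def build_request(api_key: str, prompt: str, image_url: str):
    """Assemble headers and an OpenAI-style payload: one image plus a text turn."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": "glm-5v-turbo",
        "thinking": {"type": "enabled"},  # toggle off for latency-sensitive pipelines
        "stream": False,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": prompt},
            ],
        }],
    }
    return headers, payload

headers, payload = build_request(
    "YOUR_API_KEY",
    "Recreate this page as a runnable HTML/CSS project.",
    "https://example.com/mockup.png",  # placeholder image
)
body = json.dumps(payload)  # ready to POST to API_URL with any HTTP client
```

<p>From here, any HTTP client can POST <code>body</code> with those headers; the <code>zai-sdk</code> presumably wraps the same call.</p>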
<h2>Give It a Try</h2>
<p>GLM-5V-Turbo is a focused model. It&#8217;s not trying to win every benchmark. It&#8217;s built for a specific workflow loop: see a visual environment, plan what to do, execute code. That loop is increasingly relevant as agentic systems move into real interfaces rather than sandboxed APIs.</p>
<p>The smaller model size with competitive benchmark performance is the interesting bet. If it holds up under real workloads, it becomes a cost-efficient option for vision-heavy coding pipelines.</p>
<p>Worth evaluating if your stack involves frontend generation, GUI automation, or any workflow where the input is a screenshot and the output needs to be code.</p>
<p>&nbsp;</p>
<h3><a href="https://z.ai/subscribe?ic=DINOYCG2IA">Click Here to Try GLM for Yourself</a></h3>
<p>The post <a href="https://thomasunise.com/glm-5v-turbo-z-ais-multimodal-coding-model-is-worth-your-attention/">GLM-5V-Turbo: Z.AI&#8217;s Multimodal Coding Model Is Worth Your Attention</a> appeared first on <a href="https://thomasunise.com">Thomas Unise</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://thomasunise.com/glm-5v-turbo-z-ais-multimodal-coding-model-is-worth-your-attention/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure url="https://thomasunise.com/wp-content/uploads/2026/04/xiBNKWlIAS-kEhCO.mp4" length="15068066" type="video/mp4" />

			</item>
		<item>
		<title>The science is settled: Millennials are peak evolution</title>
		<link>https://thomasunise.com/the-science-is-settled-millennials-are-peak-evolution/</link>
					<comments>https://thomasunise.com/the-science-is-settled-millennials-are-peak-evolution/#respond</comments>
		
		<dc:creator><![CDATA[Thomas Unise]]></dc:creator>
		<pubDate>Sun, 22 Feb 2026 20:51:56 +0000</pubDate>
				<category><![CDATA[Life Strategy]]></category>
		<guid isPermaLink="false">https://thomasunise.com/?p=1112</guid>

					<description><![CDATA[<p>The Flynn Effect is dead. IQ scores are falling. And the generation you blamed for killing Applebee&#8217;s turned out to be the apex. &#160; Nobody asked millennials to be the...</p>
<p>The post <a href="https://thomasunise.com/the-science-is-settled-millennials-are-peak-evolution/">The science is settled: Millennials are peak evolution</a> appeared first on <a href="https://thomasunise.com">Thomas Unise</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h3><strong><img decoding="async" class="alignnone wp-image-1114 size-full" src="https://thomasunise.com/wp-content/uploads/2026/02/eeko_systems_millennials_are_peak_evolution_-v_7_946a95f5-6484-43bb-981c-171b344859a4_0-e1771793388117.png" alt="" width="1024" height="773" srcset="https://thomasunise.com/wp-content/uploads/2026/02/eeko_systems_millennials_are_peak_evolution_-v_7_946a95f5-6484-43bb-981c-171b344859a4_0-e1771793388117.png 1024w, https://thomasunise.com/wp-content/uploads/2026/02/eeko_systems_millennials_are_peak_evolution_-v_7_946a95f5-6484-43bb-981c-171b344859a4_0-e1771793388117-300x226.png 300w, https://thomasunise.com/wp-content/uploads/2026/02/eeko_systems_millennials_are_peak_evolution_-v_7_946a95f5-6484-43bb-981c-171b344859a4_0-e1771793388117-768x580.png 768w, https://thomasunise.com/wp-content/uploads/2026/02/eeko_systems_millennials_are_peak_evolution_-v_7_946a95f5-6484-43bb-981c-171b344859a4_0-e1771793388117-200x151.png 200w" sizes="(max-width: 1024px) 100vw, 1024px" /></strong></h3>
<h3>The Flynn Effect is dead. IQ scores are falling. And the generation you blamed for killing Applebee&#8217;s turned out to be the apex.</h3>
<p>&nbsp;</p>
<p>Nobody asked millennials to be the pinnacle of measured human intelligence. But here we are.</p>
<p>For twenty years, every magazine, news segment, and Thanksgiving dinner featured some variation of the same thesis: millennials are soft, entitled, financially illiterate, and allergic to hard work.</p>
<p>We killed the housing market, the diamond industry, napkins, golf, department stores, and apparently the entire concept of fabric softener. We&#8217;re a generation of coddled, avocado-loving, participation-trophy-clutching disappointments.</p>
<p>Then the data came in.</p>
<p>Turns out the IQ escalator that had been running up since the 1930s, carrying each successive generation a few points higher than the last, stopped moving right after we stepped off.</p>
<p>Not stalled. Reversed.</p>
<p>Gen Z, the first cohort raised entirely inside the screen, is the first generation in modern history to score <em>lower</em> on standardised cognitive assessments than the one before them.</p>
<p>Which means millennials are the top. The peak. The final firmware update before the system started degrading.</p>
<p>We didn&#8217;t earn it. We didn&#8217;t plan it. But we&#8217;re absolutely going to be insufferable about it.</p>
<h2>What is The Flynn Effect?</h2>
<p>&nbsp;</p>
<p>The Flynn Effect was one of the tidiest findings in cognitive science.</p>
<p>Named after James Flynn, who documented the trend in the 1980s, it described a remarkably consistent pattern: each generation scored roughly three to five IQ points higher than the last.</p>
<p>Decade after decade, across dozens of countries, the line went up. Your grandparents were sharper than their grandparents. Your parents topped theirs. Millennials topped boomers, which in hindsight should have been treated as the civilisational red flag it was.</p>
<p>Then, sometime in the 1990s, the line bent downward. And it kept bending.</p>
<p>This isn&#8217;t one alarming study from one small country. Bratsberg and Rogeberg published their landmark findings out of Norway in 2018, showing sustained IQ declines in men born after the early 1990s. Similar reversals have been documented in Denmark, Finland, France, the Netherlands, the UK, and the United States.</p>
<p>The pattern is now broad enough and consistent enough that the academic debate has shifted from &#8220;is this real&#8221; to &#8220;how bad is this going to get.&#8221;</p>
<p>The candidate explanations are a buffet of modern anxieties. Changes in educational philosophy. Nutritional decline. Environmental contaminants. The microplastics that have colonized every organ in your body.</p>
<p>But the explanation that makes everyone the most uncomfortable is also the most obvious: that outsourcing your cognitive development to a device specifically engineered to hijack your dopamine circuitry might not produce the same neurological outcomes as, say, reading a book because there was literally nothing else to do on a Tuesday afternoon in 1994.</p>
<p>The researchers are trying to be polite about this.</p>
<p>They use phrases like &#8220;changes in cognitive stimulation patterns&#8221; and &#8220;shifts in leisure time allocation.&#8221;</p>
<p>What they mean is: we gave children a capitalist algorithm disguised as a distraction tool and then acted surprised when their abstract reasoning scores declined.</p>
<p>Whatever the cause, the trajectory is clear. The Flynn Effect climbed for roughly eighty years, peaked with millennials, and then reversed. We are standing on the summit of a mountain that took a century to build, looking around, and slowly realizing that maybe nobody else is coming up.</p>
<h2>Is it because we were bored?</h2>
<p>&nbsp;</p>
<p>IQ scores are one thing. They measure something, even if nobody can fully agree on what that is.</p>
<p>But the stronger case for millennials has nothing to do with test performance and everything to do with a developmental accident that produced the most cognitively versatile generation in history.</p>
<p>Here is what childhood was in the late 1980s and early 1990s: it was boring.</p>
<p>Not &#8220;TikTok is down today&#8221; boring. Existentially boring.</p>
<p>Your options for entertainment were: reading whatever books were in the house, riding your bike in circles, drinking from the hose, or whatever game you could invent with a stick and your own imagination.</p>
<p>That was pretty much it. That was the whole menu.</p>
<p>This, as it turns out, is the equivalent of feeding a developing brain premium fuel.</p>
<p>Boredom forces the mind to generate its own stimulation. It activates the default mode network, which is the brain system responsible for creative thought, self-reflection, and the kind of abstract problem-solving that IQ tests are actually trying to measure.</p>
<p>Every cognitive scientist studying this network will tell you the same thing: unstructured mental downtime isn&#8217;t wasted time. It&#8217;s where the real cognitive construction happens.</p>
<p>We didn&#8217;t know that. We just thought Tuesdays were long.</p>
<p>Then the internet arrived.</p>
<p>Not the internet of 2025, with its personalized feeds and algorithmic curation. The internet of 1998. The ugly one.</p>
<p>The one that screamed at you through a modem for forty-five seconds before showing you a webpage that looked like a ransom note designed by a fourteen-year-old.</p>
<p>The internet where nothing worked right, where finding information required actual detective skills, where downloading a single song could give your family computer a disease that would take your dad until Easter to fix.</p>
<p>This internet was an obstacle course, not a conveyor belt. And millennials learned to navigate it at exactly the age when the brain is most capable of absorbing new systems and integrating them into existing cognitive architecture.</p>
<p>Roughly ages ten to fifteen, when prefrontal plasticity is at its peak and the brain is essentially a biological sponge that hasn&#8217;t yet learned to be cynical about what it absorbs.</p>
<p>The result was a generation with a cognitive profile that has never existed before and will likely never exist again.</p>
<p>We had an analog foundation: deep reading, sustained attention, comfort with ambiguity, the ability to think without technological assistance.</p>
<p>We also developed digital fluency: intuitive understanding of complex systems, rapid adaptation to new tools, and (critically) a bone-deep skepticism about technology born from years of watching it fail, crash, and lie to you.</p>
<p>Gen X got the analog childhood but came to digital technology a bit too late. Their brains were already built. They use technology the way a fifty-year-old uses a skateboard: technically possible, fundamentally unnatural.</p>
<p>Gen Z got the digital immersion but never had the analog foundation. They were born into a world where every question had an instant answer and every impulse had an instant outlet.</p>
<p>They are fluent in the language of technology in the way that a fish is fluent in water. They don&#8217;t think about it. Which is precisely the problem, because you cannot critically evaluate a system you&#8217;ve never experienced the absence of.</p>
<p>Millennials are the only generation that got both. Old enough to have wired our brains on books, boredom, and the card catalogue at the public library. Young enough to have rewired them on the internet during peak neurological plasticity.</p>
<p>We are cognitive bilinguals in a world that is about to need exactly that.</p>
<h2>Millennials will dominate the AI world</h2>
<p>&nbsp;</p>
<p>All of this would be a fun bit of generational trivia if the world were staying the same. But the world is not staying the same.</p>
<p>The world is in the early stages of an AI transformation that is going to reorganize how every knowledge-based profession operates, and it is going to do so within the next decade.</p>
<p>And effective AI use, it turns out, requires a very specific cognitive toolkit. One that maps almost perfectly onto the millennial developmental profile.</p>
<p>Working well with AI is not about prompting. Any idiot can type a question into a chat box.</p>
<p>Working well with AI requires the ability to think clearly about a problem <em>before</em> you engage the tool.</p>
<p>It requires enough independent domain knowledge to evaluate whether the output is brilliant or a confident hallucination wearing a lab coat.</p>
<p>It requires comfort with iteration, the willingness to treat AI as a collaborator that needs direction rather than an oracle that dispenses truth.</p>
<p>And it requires the one thing that separates a useful AI operator from a dangerous one: the instinct to question what the machine gives you.</p>
<p>That instinct does not develop as much in people who grew up trusting technology implicitly because it always worked.</p>
<p>It develops in people who grew up watching technology lie, crash, and occasionally delete their homework.</p>
<p>It develops in people who learned to spot a sketchy website in 2003 because nobody had built a blue checkmark system yet and your only defense against misinformation was your own judgment.</p>
<p>It develops, in other words, in millennials.</p>
<p>The typical Gen X executive uses AI like a vending machine. Insert query, expect product, get frustrated when the result isn&#8217;t perfect the first time.</p>
<p>They have the critical thinking but treat the technology as a black box because it arrived in their lives later.</p>
<p>The Gen Z analyst can prompt circles around everyone in the room but struggles to focus for longer than 10 minutes and can&#8217;t evaluate whether what comes back is any good because they have little domain experience.</p>
<p>They&#8217;ve also never had to build an argument from scratch without autocomplete, so they lack internal benchmarks for what good thinking looks like when it doesn&#8217;t come pre-packaged.</p>
<p>They don&#8217;t always know what they don&#8217;t know, which is the single most dangerous gap you can have when working with a system that sounds certain about everything, including the things it just made up.</p>
<p>The millennial sits in the middle.</p>
<p>We can think without the machine, which is what qualifies us to think with it.</p>
<p>We understand technology well enough to use it intuitively but poorly enough (or rather, we <em>remember</em> it being poor enough) to never fully trust it.</p>
<p>We are the generation that learned to build before we had Claude Code, and that&#8217;s exactly who you want operating the system.</p>
<p>So after twenty years of &#8220;millennials ruined everything&#8221; think pieces.</p>
<p>Twenty years of being told we were too fragile for the real world, too entitled for the job market, too broke for the housing market, too weird for the institution of marriage.</p>
<p>We were the first generation in modern American history to be worse off financially than our parents, and we were blamed for it, as if choosing between avocado toast and a down payment were an actual financial dilemma rather than a lazy columnist&#8217;s fever dream.</p>
<p>And now, at the precise moment when the most transformative technology since the printing press is reshaping every industry on Earth, the analog-to-digital developmental arc that defined millennial childhood turns out to be the ideal cognitive preparation for what&#8217;s coming.</p>
<p>We are the bridge generation.</p>
<p>The last generation that learned to think before the thinking tools showed up.</p>
<p>The IQ curve peaked with us, the digital fluency window closed behind us, and the AI revolution arrived just as we hit the prime of our working lives.</p>
<p>The universe doesn&#8217;t have a plan.</p>
<p>It just stumbles forward, breaks things, and every once in a while, through sheer accident, produces a cohort of humans whose particular combination of skills and timing is exactly what the moment requires.</p>
<p>That cohort, against every expectation and prediction, is a bunch of thirty-something-year-olds who can explain what the Dewey Decimal System is, deploy a machine learning pipeline, and remember a time when you had to print out MapQuest directions and hope for the best.</p>
<p>We are peak human. Not because we tried. Because we happened.</p>
<p>And the timing, for once in our financially cursed lives, is perfect.</p>
<p>&nbsp;</p>
<p>Follow me on X for more bangers: <a href="http://x.com/thomasunise">x.com/thomasunise</a></p>
<p>&nbsp;</p>
<p><em>*I would like to note that while millennials may represent the apex of measured human intelligence, this has not translated into affordable housing, functional retirement planning, or the ability to explain what we do for a living to our parents. Evolution, it seems, has its priorities.</em></p>
<p>&nbsp;</p>
<p>The post <a href="https://thomasunise.com/the-science-is-settled-millennials-are-peak-evolution/">The science is settled: Millennials are peak evolution</a> appeared first on <a href="https://thomasunise.com">Thomas Unise</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://thomasunise.com/the-science-is-settled-millennials-are-peak-evolution/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The Uncomfortable Math of Working for Yourself</title>
		<link>https://thomasunise.com/the-uncomfortable-math-of-working-for-yourself/</link>
					<comments>https://thomasunise.com/the-uncomfortable-math-of-working-for-yourself/#respond</comments>
		
		<dc:creator><![CDATA[Thomas Unise]]></dc:creator>
		<pubDate>Thu, 22 Jan 2026 21:22:53 +0000</pubDate>
				<category><![CDATA[Entrepreneurship]]></category>
		<guid isPermaLink="false">https://thomasunise.com/?p=937</guid>

					<description><![CDATA[<p>Nobody Tells You Freedom Has a Billing Rate This year in 2026, I turn 40. I&#8217;ve been self-employed since I was 25. That&#8217;s fifteen years of answering to no one,...</p>
<p>The post <a href="https://thomasunise.com/the-uncomfortable-math-of-working-for-yourself/">The Uncomfortable Math of Working for Yourself</a> appeared first on <a href="https://thomasunise.com">Thomas Unise</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold"><img decoding="async" class="alignnone wp-image-942 size-full" src="https://thomasunise.com/wp-content/uploads/2026/01/eeko_systems_The_Uncomfortable_Math_of_Working_for_Yourself_-_a5aebe8a-0683-4ec4-81fe-b61fc547154c_2-e1769135630490.png" alt="" width="1024" height="630" srcset="https://thomasunise.com/wp-content/uploads/2026/01/eeko_systems_The_Uncomfortable_Math_of_Working_for_Yourself_-_a5aebe8a-0683-4ec4-81fe-b61fc547154c_2-e1769135630490.png 1024w, https://thomasunise.com/wp-content/uploads/2026/01/eeko_systems_The_Uncomfortable_Math_of_Working_for_Yourself_-_a5aebe8a-0683-4ec4-81fe-b61fc547154c_2-e1769135630490-300x185.png 300w, https://thomasunise.com/wp-content/uploads/2026/01/eeko_systems_The_Uncomfortable_Math_of_Working_for_Yourself_-_a5aebe8a-0683-4ec4-81fe-b61fc547154c_2-e1769135630490-768x473.png 768w, https://thomasunise.com/wp-content/uploads/2026/01/eeko_systems_The_Uncomfortable_Math_of_Working_for_Yourself_-_a5aebe8a-0683-4ec4-81fe-b61fc547154c_2-e1769135630490-200x123.png 200w" sizes="(max-width: 1024px) 100vw, 1024px" /></h2>
<h2 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">Nobody Tells You Freedom Has a Billing Rate</h2>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">This year in 2026, I turn 40. I&#8217;ve been self-employed since I was 25.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">That&#8217;s fifteen years of answering to no one, setting my own hours, and being my own boss, which, as it turns out, means answering to everyone, working all the hours, and reporting to the most unreasonable boss I&#8217;ve ever had.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">I mostly love what I do. I want to be clear about that before I say what I&#8217;m about to say.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">I am not writing this from a place of regret. I&#8217;m writing it from a place of hard-won clarity, the kind you get after watching the entrepreneurship narrative play out in your own life long enough to see where the brochure diverges from the building.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>Here&#8217;s what I&#8217;ve learned:</strong> we need to stop telling people that starting a business is the ultimate unlock to freedom and wealth. Because for most people, it isn&#8217;t. And the continued perpetuation of this myth is doing real damage to real people making real decisions about their one finite life.</p>
<h2 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">Entrepreneurship Is a Job You Can&#8217;t Quit</h2>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">There&#8217;s a particular irony that only the self-employed will truly understand.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">You leave a job because you want more control over your time, more autonomy, more upside. And then, slowly, through a series of completely logical decisions, you build yourself into a new job. One with worse benefits, no paid vacation, and a manager who sends emails at 2 AM because he doesn&#8217;t understand boundaries.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">That manager is you, of course. And they&#8217;re kind of a nightmare.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Most entrepreneurs don&#8217;t build businesses. They build jobs. Jobs where they are simultaneously the janitor and the CEO, the sales team and the fulfillment department, the visionary and the person updating the WordPress plugins at midnight because the site went down again.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">And that freedom everyone talks about?</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Sure, it exists, theoretically. You <em>can</em> take Tuesday off. You can work from anywhere. You can structure your day however you want. But freedom without financial margin is just anxiety with flexible hours. And for most self-employed people, the margin is thin and a Tuesday off means Wednesday is going to suck.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">I&#8217;ve been on vacations where I spent four hours a day on my laptop in the hotel room looking at ad dashboards. I&#8217;ve missed family events for client emergencies. I&#8217;ve had months where the &#8220;freedom&#8221; of self-employment meant the freedom to not go out on the weekend or pay myself so I could make it through the month. That&#8217;s not even a complaint, it&#8217;s just the math.</p>
<h2 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">Survivors Are Loud; Statistics Are Quiet</h2>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Here&#8217;s the thing no one acknowledges about the entrepreneurship narrative: it&#8217;s written by survivors.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The people who write the books, give the TED talks, and post the LinkedIn manifestos about &#8220;betting on yourself&#8221; are, by definition, the ones for whom the bet paid off. You don&#8217;t hear much from the ones who went back to traditional employment after three years of grinding, a depleted savings account, and a marriage that got stress-tested past its limits.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">This creates a sampling bias so severe it borders on disinformation. We hear about the wins because the wins are worth talking about. We don&#8217;t hear about the quiet returns to cubicles, the businesses that &#8220;pivoted&#8221; into shutting down, the entrepreneurs who discovered that their appetite for risk exceeded their tolerance.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The outcomes you see and read about are uncommon.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">They&#8217;re not impossible, and I&#8217;m not trying to suggest they are. But they are exceptional by definition.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The median outcome of starting a business is not a lifestyle brand and a beach house. The median outcome is a lot of work for uncertain returns, followed by a return to employment with a few good stories and a complicated relationship with the phrase &#8220;passive income.&#8221;</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Meanwhile, the boring path (the one where you join an organization, develop expertise, grow with the company, get promoted, and make solid investments over time) produces successful outcomes so routinely that nobody writes articles about it. There&#8217;s no memoir in becoming a senior director over twelve years and maxing out your 401(k). But there might be a lake house.</p>
<h2 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">The Rocket Ship Theory of Career Success</h2>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Here&#8217;s a framework I wish someone had given me at 25: instead of asking &#8220;How do I start something?&#8221;, ask &#8220;What&#8217;s already moving that I can get aboard?&#8221;</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The most reliable path to wealth and stability isn&#8217;t building a rocket ship from scratch. It&#8217;s finding one that&#8217;s already achieved escape velocity and getting yourself a seat.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">This means identifying organizations with genuine momentum, companies that are growing, industries that are expanding, teams that are winning, and positioning yourself to ride that trajectory upward.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">This has nothing to do with being unambitious. Quite the opposite.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">It&#8217;s about being strategic with the one career you have. When you join a rocket ship, several things happen that are nearly impossible to replicate as a solo operator:</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>Your growth is subsidized.</strong> Someone else is paying for your education, your mistakes, your professional development. You get to learn on their dime while they pay you for the privilege.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>Your network compounds automatically.</strong> Every colleague, every client relationship, every cross-functional project adds nodes to a professional network that will pay dividends for decades. This happens almost passively when you&#8217;re embedded in an organization. As a self-employed person, networking is a deliberate, time-consuming activity that competes with billable work.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Guess which one usually wins?</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>Your upside may be capped, but your downside is also floored.</strong></p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Yes, you probably won&#8217;t become a billionaire working for someone else. But you also probably won&#8217;t have a month where you make negative money. The stability isn&#8217;t exciting, but excitement gets old quick when you&#8217;re trying to build a life.</p>
<h2 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">Add In The Isolation</h2>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Nobody talks about how lonely self-employment can be. Or rather, people talk about it in some romantic, misunderstood-artist kind of way, as if solitude is a feature rather than a bug.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Trust me, it&#8217;s a bug.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">When you work for yourself, you lose access to the professional development that happens organically in organizations. The hallway conversations, the lunch discussions about industry trends, the osmotic absorption of how things work at scale.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">You lose access to mentors who are invested in your growth because your growth benefits them. You lose access to peers who challenge your assumptions because they&#8217;re stuck with you every day.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Instead, you get Twitter threads and online communities and masterminds you pay to attend.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">These aren&#8217;t nothing, but they&#8217;re not the same as being embedded in an institution where your professional growth is structurally supported rather than something you have to manufacture in your spare time.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">I&#8217;ve spent fifteen years building expertise in relative isolation. I&#8217;m good at what I do, some would say really good, but I sometimes wonder how much further along I&#8217;d be if I&#8217;d spent those years inside organizations with resources, training budgets, and senior people who had reasons to invest in my development beyond my ability to pay them.</p>
<h2 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">And The Wealth Building</h2>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Here&#8217;s where the math gets uncomfortable for the entrepreneurship narrative.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">If you take a stable job with a growing organization and invest consistently in low-cost index funds and real estate over a 30-year career, your probability of retiring with significant wealth is remarkably high. Not guaranteed because nothing is guaranteed, but high enough that actuaries build pension models around it.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The S&amp;P 500 has averaged roughly 10% annual returns over the long term. Real estate, held over decades, appreciates while providing tax advantages and forced savings through mortgage paydown. A person who earns a solid salary, lives below their means, and invests the difference, will, with high probability, turn 60 with real wealth.</p>
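<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">To make that math concrete, here&#8217;s a back-of-the-envelope sketch. The $1,000-a-month contribution is a hypothetical figure, and 10% is a long-run average rather than a promise, so treat the output as an illustration of compounding, not a forecast:</p>

```python
# Rough sketch of the compounding math behind the "boring path."
# Assumptions (illustrative only): $1,000/month contributed, ~10% average
# annual return compounded monthly, 30 years, no taxes, fees, or inflation.

def future_value(monthly_contribution, annual_return, years):
    """Future value of a fixed monthly contribution, compounded monthly."""
    r = annual_return / 12          # approximate monthly rate
    n = years * 12                  # total number of contributions
    return monthly_contribution * (((1 + r) ** n - 1) / r)

balance = future_value(1_000, 0.10, 30)
print(f"${balance:,.0f}")  # about $2.26 million, in nominal dollars
```

<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Even after haircutting that number for inflation, three decades of steady contributions land comfortably in seven figures, which is the entire case for the boring path in one function.</p>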
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Now compare that to the entrepreneurship path.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The failure rate of small businesses is well-documented, and depending on how you define failure, genuinely grim. The ones that do actually survive often provide a living but not wealth. The ones that provide wealth are the exception, not the rule.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">I&#8217;m not saying don&#8217;t take the risk. I took the risk, and I&#8217;d probably take it again.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">But I am saying: understand the probabilities. Understand that the boring path has better odds. Understand that the stories told about entrepreneurship are survivor bias compressed into motivation.</p>
<h2 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">So What&#8217;s The Move?</h2>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Honestly, I have no idea. If I&#8217;m being this clear-eyed about the math, why have I done this for fifteen years? And why do I continue? Mostly because some of us just aren&#8217;t wired for the alternative.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">We don&#8217;t do well with authority.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">And also because the thing that makes entrepreneurship hard (the responsibility, the uncertainty, the requirement that you figure it out yourself) is also what makes it compelling.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">But that&#8217;s <em>my</em> answer. That&#8217;s not advice. It&#8217;s a personality trait that I&#8217;ve been lucky enough to channel into something sustainable.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">What I&#8217;m trying to say is this: as I approach the age where &#8220;someday&#8221; is getting a lot closer than it used to be, I&#8217;ve come to realize that we owe people a more honest conversation about self-employment.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">We owe them the math alongside the mythology. We owe them the understanding that working for yourself is a path, not <em>the</em> path, and that the other paths deserve more respect than the hustle culture industrial complex has given them.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">If you&#8217;re reading this and you&#8217;ve been feeling guilty about not starting something, not betting on yourself, not taking the leap, I want you to know that the guilt is manufactured. It&#8217;s a byproduct of a narrative that benefits the people selling courses about how to escape the 9-to-5, not necessarily the people sitting in the audience.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Finding an organization with momentum, contributing to something larger than yourself, building a career through steady growth and strategic moves, that is not settling. It&#8217;s not playing it safe in some pejorative sense. It&#8217;s a legitimate strategy with better odds than the one being sold to you on Instagram, YouTube, and TikTok by people who need you to believe that their path is the only respectable option.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">You can build wealth. You can have a meaningful career. You can maintain your dignity and your ambition. And you can do it while clocking out at 5:30 and not checking your email until Monday.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">That&#8217;s not failure. That&#8217;s success with better boundaries.</p>
<h2 class="text-text-100 mt-3 -mb-1 text-[1.125rem] font-bold">Thoughts at 39</h2>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">I turn 40 soon. I&#8217;ve spent fifteen years building something no one really cares about but me, learning things I couldn&#8217;t have learned any other way, and making every mistake available in the self-employment catalog at least twice.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">I don&#8217;t regret it. But I also don&#8217;t pretend it was the only way to build a good life. Or even the best one.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">If you&#8217;re young and ambitious and trying to figure out what to do with your career, here&#8217;s my advice: be honest with yourself about what you actually want, what you&#8217;re actually willing to endure, and what outcomes you&#8217;re actually optimizing for.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">If you want freedom, understand what freedom actually costs. If you want wealth, look at where wealth actually comes from.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">If you want meaning, consider that meaning might be found in more places than the entrepreneurship narrative would have you believe.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">There are lots of rocket ships out there filled with good humans. Some of them have seats available. Getting aboard one might be the smartest thing you ever do.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">And if you do decide to build your own? Know what you&#8217;re signing up for. The brochure lies. The freedom is conditional. The boss is terrible.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">But if you&#8217;re cut from the right cloth, it&#8217;s totally worth it.</p>
<p>&nbsp;</p>
<p>The post <a href="https://thomasunise.com/the-uncomfortable-math-of-working-for-yourself/">The Uncomfortable Math of Working for Yourself</a> appeared first on <a href="https://thomasunise.com">Thomas Unise</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://thomasunise.com/the-uncomfortable-math-of-working-for-yourself/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
