<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="/feed.xml" rel="self" type="application/atom+xml" /><link href="/" rel="alternate" type="text/html" /><updated>2026-02-12T18:34:24+00:00</updated><id>/feed.xml</id><title type="html">Nate Wiger</title><subtitle>How many beers until the bugs are fixed?</subtitle><author><name>Nate Wiger</name></author><entry><title type="html">✍🏼 Converting from WordPress to GitHub Pages with Claude</title><link href="/2026/02/09/converting-from-wordpress-to-jekyll-with-claude/" rel="alternate" type="text/html" title="✍🏼 Converting from WordPress to GitHub Pages with Claude" /><published>2026-02-09T20:00:00+00:00</published><updated>2026-02-09T20:00:00+00:00</updated><id>/2026/02/09/converting-from-wordpress-to-jekyll-with-claude</id><content type="html" xml:base="/2026/02/09/converting-from-wordpress-to-jekyll-with-claude/"><![CDATA[<p>I’ve used WordPress for many, many years, but I got to the point where I was tired of paying $100/year for a blog that gets a few hundred visits per month. I’ve played around with static site generators like <a href="https://jekyllrb.com/">Jekyll</a>, but have always dreaded the amount of time it would take to convert my existing WordPress blog. Then I realized one evening: Why would I do this myself in the age of AI? Enter Claude Code.</p>

<h2 id="the-prompt">The Prompt</h2>

<p>Converting a blog is a major PITA. You have to download the existing WordPress articles, port them to Markdown, change all the image links, make sure code is formatted correctly, etc., etc. Surely I could get Claude to do this for me? What I didn’t expect was that it would do a much better job than I would, in a fraction of the time.</p>

<p>I followed the <a href="https://docs.github.com/en/pages/quickstart">GitHub Pages Quickstart</a> to set up a fresh repo for my new blog. At this point it was just a skeleton with no content. Then I opened it in Claude Code and issued this prompt:</p>

<blockquote>
  <p>this is a personal blog written in jekyll. i want to convert my existing wordpress blog into jekyll posts. my blog is located at: nateware.com . for each post on my blog, please create a corresponding markdown file. respect any of the formatting in my existing blog - headers, code segments, links, etc. for each existing blog post, create a new jekyll markdown file in the _posts directory formatted as YYYY-MM-DD-[slug from existing blog]. For example my blog article located at https://nateware.com/2010/02/18/an-atomic-rant/ should be downloaded, converted to markdown, and saved as _posts/2010-02-18-an-atomic-rant.md. Create a plan</p>
</blockquote>

<p>Claude then generated a plan where, for each post, it would:</p>

<ol>
  <li>Fetch the full page content via WebFetch</li>
  <li>Extract title, categories, and tags</li>
  <li>Convert HTML body to clean Markdown</li>
  <li>Write file to <code class="language-plaintext highlighter-rouge">_posts/YYYY-MM-DD-slug.md</code></li>
</ol>
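<p>The per-post step in that plan can be sketched in Ruby. To be clear, this is my own illustration of the idea, not Claude’s actual code – the front matter layout and helper names are assumptions:</p>

```ruby
require 'fileutils'
require 'uri'

# Derive the Jekyll filename from a WordPress permalink like
# https://nateware.com/2010/02/18/an-atomic-rant/
def jekyll_filename(url)
  year, month, day, slug = URI(url).path.split('/').reject(&:empty?)
  "#{year}-#{month}-#{day}-#{slug}.md"
end

# Write converted Markdown into _posts/ with minimal front matter.
def write_post(url, title:, body_markdown:, dir: '_posts')
  FileUtils.mkdir_p(dir)
  path = File.join(dir, jekyll_filename(url))
  front_matter = "---\nlayout: post\ntitle: \"#{title}\"\n---\n\n"
  File.write(path, front_matter + body_markdown)
  path
end
```

<p>The hard part – fetching each page and converting the HTML body to clean Markdown – is exactly what Claude automated via WebFetch.</p>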

<p>It even added this step that I didn’t ask for:</p>

<ul>
  <li>Add <code class="language-plaintext highlighter-rouge">permalink: /:year/:month/:day/:title/</code> to <code class="language-plaintext highlighter-rouge">_config.yml</code> to match existing WordPress URL structure for SEO/link preservation</li>
</ul>
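<p>For reference, that single line in <code class="language-plaintext highlighter-rouge">_config.yml</code> looks like this:</p>

```yaml
# _config.yml - keep the old WordPress URL structure
permalink: /:year/:month/:day/:title/
```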

<p>When it came to images, Claude came up with this strategy:</p>

<ul>
  <li>Download images referenced in posts to <code class="language-plaintext highlighter-rouge">assets/images/</code></li>
  <li>Update markdown references to <code class="language-plaintext highlighter-rouge">/assets/images/filename.ext</code></li>
  <li>3 older images (replacing-macbook-ssd post) recovered from Wayback Machine since originals no longer on WordPress <strong>(WHAT?!)</strong></li>
</ul>

<p>The last one blew my mind – Claude was “smart” enough to detect that an image link was broken, and rather than giving up, it knew to go to the Wayback Machine and find the image in a previous snapshot. That’s crazy. I don’t think I would have even thought of that.</p>
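<p>The URL-rewriting half of that image strategy is simple enough to sketch. The <code class="language-plaintext highlighter-rouge">wp-content</code> URL pattern below is my assumption about how WordPress hosted the images, not something taken from Claude’s output:</p>

```ruby
# Rewrite WordPress-hosted image URLs to local /assets/images/ paths.
# Assumes the typical wp-content/uploads/YYYY/MM/ layout.
WP_IMAGE = %r{https?://nateware\.com/wp-content/uploads/[\d/]*([\w.-]+\.(?:png|jpe?g|gif))}

def localize_images(markdown)
  markdown.gsub(WP_IMAGE) { "/assets/images/#{$1}" }
end
```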

<h2 id="the-results">The Results</h2>

<p>You’re seeing it for yourself. Other than editing this blog post, I did no coding or conversion steps myself. Everything was automated via Claude Code. You can view the full repo at: <a href="https://github.com/nateware/nateware.github.io">nateware.github.io</a></p>

<p>Up next is writing a full app “hands off the wheel” style. More to come…</p>]]></content><author><name>Nate Wiger</name></author><category term="ai" /><category term="technology" /><category term="ai" /><category term="claude" /><category term="jekyll" /><summary type="html"><![CDATA[I’ve used WordPress for many, many years, but I got to the point where I was tired of paying $100/year for a blog that gets a few hundred visits per month. I’ve played around with static site generators like Jekyll, but have always dreaded the amount of time it would take to convert my existing WordPress blog. Then I realized one evening: Why would I do this myself in the age of AI? Enter Claude Code.]]></summary></entry><entry><title type="html">📈 Nate’s Stock Market Theory of Management</title><link href="/2019/07/09/nates-stock-market-theory-of-management/" rel="alternate" type="text/html" title="📈 Nate’s Stock Market Theory of Management" /><published>2019-07-09T20:00:00+00:00</published><updated>2019-07-09T20:00:00+00:00</updated><id>/2019/07/09/nates-stock-market-theory-of-management</id><content type="html" xml:base="/2019/07/09/nates-stock-market-theory-of-management/"><![CDATA[<p>Life at a fast-moving company is full of swings, highs, and lows. But managers often fail in one of their most important responsibilities: providing stability for their teams. Managing teams is like stock market investing. Here’s how.</p>

<p>In software, there may be launches, bugs, or service outages that send individuals across the organization scrambling from one priority to the next. In operations, there can be holiday sales, labor strikes, or equipment issues that cause huge variations in day-to-day work. This churn is often visible through email escalations, phone alerts, or literal flashing red lights.</p>

<p>As a manager, you are guiding your teams, helping them release products and triage issues. But you’re not sitting side-by-side with every engineer, experiencing every bug fix with them. Your job is to smooth out bumps and valleys, and keep the team together as a unit. In times of crisis, you are there to calm them. In times of change, you are there to guide them through.</p>

<p><strong>You are a smoothing function, like a moving average in a stock market graph.</strong></p>

<p><img src="/assets/images/moving-average-mgmt.png" alt="Moving Average Management" /></p>

<p>The amount of smoothing you do depends on your role. As a front-line manager (the red trend line), you need to respond to day-to-day events that impact your team. But as a more senior manager, you should <strong>not</strong> respond too quickly. Overreacting to daily events leads to “knee-jerk reactions” or “<a href="https://en.wikipedia.org/wiki/Seagull_manager">seagull management</a>”.</p>

<p>As you move into more senior management roles, you take on a broader perspective, and a longer-term view. Rather than managing one team and thinking daily or weekly, you are managing multiple teams and thinking monthly or quarterly (the green trend line). At the executive level, you are looking out 6-12 months and creating multi-year plans (the blue trend line). Your job is to provide a stable vision for the team, a North Star to navigate towards. In stock market terms, you are a daily, then a 50-day, then a 200-day moving average for your team.</p>
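<p>For anyone who hasn’t stared at stock charts: a moving average simply averages the last N data points, so a wider window reacts more slowly to any single spike. A quick illustration in Ruby (my own toy example):</p>

```ruby
# Average each point with up to (window - 1) preceding points.
# Wider windows produce smoother, slower-moving trend lines.
def moving_average(series, window)
  series.each_index.map do |i|
    slice = series[[i - window + 1, 0].max..i]
    slice.sum.to_f / slice.size
  end
end
```

<p>With a window of 2, a spike of 60 in <code class="language-plaintext highlighter-rouge">[10, 20, 60, 20]</code> pulls the line up to 40; with a window of 4, the same spike only nudges it to 27.5. That’s the front-line manager versus the SVP.</p>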

<h2 id="keeping-in-sync">Keeping in Sync</h2>

<p>The stock market moving average analogy can be taken further. You’ll notice in the graph I chose that each of the management layers is somewhat out of sync. It gets particularly pronounced in the middle section, where lines cross and move in opposite directions. In our analogy, this could represent a change in strategy, or an internal reorganization. Eventually, leadership realigns, and the team can move forward.</p>

<p><img src="/assets/images/moving-average-mgmt-change.png" alt="Moving Average Management Change" /></p>

<p>Notice that while leadership is not aligned, the team vacillates back and forth. When teams report feeling “churn”, this is what they are feeling.</p>

<p>The longer the time period a moving average reflects, the more out of sync it can be with daily events. In the above graph, the Director and VP/SVP levels are stable on the down and up swings. But their version of reality is not entirely in sync with what is happening day-to-day. This is a common challenge for senior leadership.</p>

<p>This is where empowerment comes in. I would argue the above graph is healthy, <strong>if</strong> the front-line manager (the red trend line) is empowered appropriately. The long-term trend lines focus on the long term, and the short-term trend lines on the short term.</p>

<p>Having spent almost 7 years at Amazon, I can say this is something Amazon does really well. Every “two-pizza team” owns their own destiny – the tools they use, the coding methods they follow, the internal systems they reuse (or don’t), the scrum discipline they adopt, and so forth.</p>

<p>I witnessed numerous Amazon new hires experience extreme culture shock. Over the years I heard people comment that it seemed like “general anarchy”, “barely controlled chaos”, or even simply, “I can’t believe a company can operate this way”. But consider the reverse. Imagine if every little decision about every feature, or toolset, or architecture choice had to go up to the VP layer (the blue trend line). The entire company would grind to a halt. Instead, it’s a rocket ship.</p>

<h2 id="try-this">Try This</h2>

<p>First, figure out what smoothing line you are supposed to be. Are you a front-line manager? Director? SVP? Make sure you are acting appropriately.</p>

<p>Second, are you empowering your people? Empowerment is a big one that pays off. If you challenge people with a stretch goal and tell them “I believe in you”, they can do amazing things. (Shocking, I know.)</p>

<p>Finally, if you have any great stock tips, let me know.</p>]]></content><author><name>Nate Wiger</name></author><category term="leadership" /><category term="leadership" /><summary type="html"><![CDATA[Life at a fast-moving company is full of swings, highs, and lows. But managers often fail in one of their most important responsibilities: providing stability for their teams. Managing teams is like stock market investing. Here’s how.]]></summary></entry><entry><title type="html">🍩 Donut Based Security at Amazon</title><link href="/2019/04/08/donut-based-security/" rel="alternate" type="text/html" title="🍩 Donut Based Security at Amazon" /><published>2019-04-08T20:00:00+00:00</published><updated>2019-04-08T20:00:00+00:00</updated><id>/2019/04/08/donut-based-security</id><content type="html" xml:base="/2019/04/08/donut-based-security/"><![CDATA[<p>This is not a clever technical article where DONUT is some obscure new encryption algorithm. This is about getting people to lock their laptop screens. Using donuts.</p>

<p>In the early days of the Amazon San Diego office, we were in an unsecured, shared office space with other companies. As such, it was crucial that people remembered to lock their screens whenever they left their computers, even if only for a few minutes. But humans forget, and we needed a way to actively catch them and help correct their behavior.</p>

<p><img src="/assets/images/cyber-theft-1-728.jpg" alt="Cyber theft" height="140" style="float:left; margin-right: 10px" /></p>

<p>A normal company probably would have put up posters and sent out emails about the importance of locking screens – which would have been promptly deleted and ignored. Or had managers remind employees about the importance of security blah blah blah. Or created a 45-minute training video about the dangers of cybertheft with spooky looking cartoon bad guys.</p>

<p>Of course, none of these work, and worse yet, they are BORING.</p>

<h2 id="what-we-did-was-this">What we did was this.</h2>

<p>If we stumbled upon an unlocked laptop screen, we would send out an email from that person’s account:</p>

<blockquote>
  <p>To: sandiego-all@amazon.com</p>

  <p>Subject: Free donuts tomorrow!</p>

  <p>Hey everyone, I realized we haven’t had donuts in a while, so I’ll bring in a box tomorrow for everyone! Enjoy!</p>
</blockquote>

<p>You had to be fast, since the person could come back at any moment. The key was to not get caught, so they had no idea who did it.</p>

<p>If your computer was used to send such an email, you were duty-bound to bring donuts the next day. No ifs, ands, or buts. You had been donuted.</p>

<p><img src="/assets/images/vg-donuts.jpg" alt="VG Bakery donuts" /></p>

<p>This was remarkably effective. Over time the donuts decreased in frequency, which was a little disappointing from a stomachular perspective, but showed it was effective from a security perspective.</p>

<p>There are a few reasons why this worked:</p>

<ol>
  <li><strong>We made it a game.</strong> Everyone could participate. It was fun.</li>
  <li><strong>Humans hate being embarrassed.</strong> This was a mild sting, but it still stung enough for people to remember.</li>
  <li><strong>There was an inconvenience factor.</strong> Now you had to drive and buy donuts tomorrow.</li>
  <li><strong>There was social pressure.</strong> Everyone knew what was expected. Nobody ever failed to bring in donuts.</li>
</ol>

<p>The cool thing is this game spread organically. I remember sending the first email out. It was a coworker’s computer – a senior Amazonian who should have known better. He dutifully brought donuts in the next day. From there it caught on like wildfire. (I also donuted him another half dozen times before he learned – I swear he was the worst at locking his screen!)</p>

<p>Once the precedent was set, the game was on. Who said enforcing security was no fun?</p>]]></content><author><name>Nate Wiger</name></author><category term="leadership" /><category term="leadership" /><category term="security" /><summary type="html"><![CDATA[This is not a clever technical article where DONUT is some obscure new encryption algorithm. This is about getting people to lock their laptop screens. Using donuts.]]></summary></entry><entry><title type="html">🏄‍♂️ How Amazon Ended Up in San Diego</title><link href="/2019/02/04/how-amazon-ended-up-in-san-diego/" rel="alternate" type="text/html" title="🏄‍♂️ How Amazon Ended Up in San Diego" /><published>2019-02-04T20:00:00+00:00</published><updated>2019-02-04T20:00:00+00:00</updated><id>/2019/02/04/how-amazon-ended-up-in-san-diego</id><content type="html" xml:base="/2019/02/04/how-amazon-ended-up-in-san-diego/"><![CDATA[<p>One of my favorite career accomplishments so far was founding the Amazon San Diego office. I wrote the 6-pager proposal and presented it to Amazon VP/SVPs to gain approval. I was employee #1 in the office, hiring a team that grew to 300+ people in less than 3 years (now it’s &gt;2,000). Here’s a bit more of the behind the scenes.</p>

<p>I chose all three locations – first the temp space in Solana Beach, then the interim space in UTC, then the final office at Campus Point where the office lives today. I got to meet the mayor and be interviewed on TV, which was a ton of fun. You can read all about it in this <a href="https://blog.aboutamazon.com/working-at-amazon/how-amazon-ended-up-in-san-diego">Amazon Day One blog article</a>. There’s another more lighthearted article that talks about the space itself on <a href="http://www.hatch-mag.com/2018/12/18/inside-amazon-utc-area-tech-hub/">Hatch</a>.</p>

<p><img src="/assets/images/amazon-san-diego.jpeg" alt="Amazon San Diego office" /></p>

<p>What those articles don’t highlight is the huge team effort it took to get us there. Each manager at the office owned a different major initiative. One oversaw our hiring pipeline, another ran mixers and events, and another kept the office stocked with snacks and beer. At the very start, the office didn’t have a printer, so another leader and I drove to Staples, bought one, brought it back in his truck, and plugged it in. We had 4 people squeezed into 2-person offices and it was a blast.</p>

<p>Hiring the right people, mentoring them to become leaders, and leaving behind an office that continues to thrive was incredibly rewarding. It taught me a ton about what it means to be a good leader. It’s not about telling people what to do or delegating tasks. It’s about inspiring people with a big hairy goal, empowering and supporting them, but then generally getting the hell out of their way.</p>

<p>This can be a very uncomfortable feeling. If you do it right, you won’t have details on everything happening under your watch. People will be making decisions on their own and acting autonomously. You’ll find out things that have been happening for weeks that you were completely unaware of. The trick is, when you find out, do those things make you go “cool!” or “holy shit why are they doing THAT??”</p>

<p>I’m not going to lie, there were definitely several “oh shit” moments – but most of the time it was pleasant surprises. Credit goes to the phenomenal management and senior engineering talent we were able to hire. For my part, I tried to be clear about what was most important. Rather than telling people what to do, I just tried to inspire them with why we were there in the first place. We were colonizing a whole new city for Amazon – a pretty audacious goal to be part of.</p>

<p>Our mission as I saw it was:</p>

<ol>
  <li>Prove to people that San Diego was a real tech city – real enough to support top-tier companies like Amazon.</li>
  <li>Create a culture where people were proud of their work, felt appreciated, and had a fun time overall.</li>
  <li>Bring the San Diego influence back into Amazon, and demonstrate that you could do awesome things while still having fun.</li>
</ol>

<p>That’s basically it. I like keeping things simple.</p>

<p>To everyone who was involved (and continues to be involved), all I can say is thank you. It was an experience I’ll cherish the rest of my life and it’s hard for me to put into words how humbled I feel meeting so many incredible people. Can’t wait to hear when the office hits 1,000 people!</p>]]></content><author><name>Nate Wiger</name></author><category term="leadership" /><category term="leadership" /><summary type="html"><![CDATA[One of my favorite career accomplishments so far was founding the Amazon San Diego office. I wrote the 6-pager proposal and presented it to Amazon VP/SVPs to gain approval. I was employee #1 in the office, hiring a team that grew to 300+ people in less than 3 years (now it’s &gt;2,000). Here’s a bit more of the behind the scenes.]]></summary></entry><entry><title type="html">🧮 Advanced Game Analytics with AWS at GDC 2015</title><link href="/2015/03/28/advanced-game-analytics-with-aws-at-gdc-2015/" rel="alternate" type="text/html" title="🧮 Advanced Game Analytics with AWS at GDC 2015" /><published>2015-03-28T20:00:00+00:00</published><updated>2015-03-28T20:00:00+00:00</updated><id>/2015/03/28/advanced-game-analytics-with-aws-at-gdc-2015</id><content type="html" xml:base="/2015/03/28/advanced-game-analytics-with-aws-at-gdc-2015/"><![CDATA[<p>Based on the response to my <a href="/2014/03/21/game-analytics-with-aws-at-gdc-2014/">GDC 2014 talk</a>, I gave an expanded talk on game analytics at GDC 2015. You can <a href="https://www.gdcvault.com/play/1021876/Connecting-with-Your-Customers-Building">watch the video for free in the GDC Vault</a>.</p>

<p>Launching a successful free-to-play game or mobile app requires having in-depth, real-time analytics about your users. This is easier said than done, but hopefully the above video provides some insight.</p>]]></content><author><name>Nate Wiger</name></author><category term="aws" /><category term="gaming" /><category term="analytics" /><category term="aws" /><category term="gaming" /><summary type="html"><![CDATA[Based on the response to my GDC 2014 talk, I gave an expanded talk on game analytics at GDC 2015. You can watch the video for free in the GDC Vault.]]></summary></entry><entry><title type="html">🧮 Game Analytics with AWS at GDC 2014</title><link href="/2014/03/21/game-analytics-with-aws-at-gdc-2014/" rel="alternate" type="text/html" title="🧮 Game Analytics with AWS at GDC 2014" /><published>2014-03-21T20:00:00+00:00</published><updated>2014-03-21T20:00:00+00:00</updated><id>/2014/03/21/game-analytics-with-aws-at-gdc-2014</id><content type="html" xml:base="/2014/03/21/game-analytics-with-aws-at-gdc-2014/"><![CDATA[<p>I gave a talk at GDC 2014 all about game analytics and AWS. In the talk, I showed how to start small by uploading analytics files from users devices to S3, and then processing them with Redshift. As your game grows, add more data sources and AWS services such as Kinesis and Elastic MapReduce to perform more complex processing. Here are <a href="http://www.slideshare.net/slideshow/embed_code/32592688">the slides on Slideshare</a> and <a href="http://aws.amazon.com/game-hosting/GDC2014-videos/">the videos on YouTube</a>.</p>

<p>Free-to-play has become a ubiquitous strategy for publishing games, especially mobile and social games. Succeeding in free-to-play requires having razor-sharp analytics on your players, so you know what they love and what they hate. Free-to-play aside, having an awesome game has always been about maximizing the love and minimizing the hate. Charge a reasonable price for the things your players love and you have a simple but effective monetization strategy.</p>

<p>At the end of the talk, I blabbed a bit about what I see as the future of gaming: Big data and real-time analytics. The more in-tune you can get with your players, and the faster you can react, the more your game is going to differentiate itself. Recently there was a massive battle in <a href="http://www.eveonline.com/">EVE Online</a> that <a href="http://bigstory.ap.org/article/unpaid-bill-leads-game-battle-worth-200000">destroyed nearly $500,000 worth of ships and equipment</a>. Imagine being able to react in real-time, in the heat of battle, offering players discounted ammunition targeted at their fleet and status in battle. Some <a href="http://blog.eyesurf.info/?p=2727">estimate impulse buys to account for 40% of all ecommerce</a>, meaning there is huge untapped potential for gaming in the analytics space.</p>]]></content><author><name>Nate Wiger</name></author><category term="aws" /><category term="gaming" /><category term="analytics" /><category term="aws" /><category term="gaming" /><summary type="html"><![CDATA[I gave a talk at GDC 2014 all about game analytics and AWS. In the talk, I showed how to start small by uploading analytics files from users’ devices to S3, and then processing them with Redshift. As your game grows, add more data sources and AWS services such as Kinesis and Elastic MapReduce to perform more complex processing. 
Here are the slides on Slideshare and the videos on YouTube.]]></summary></entry><entry><title type="html">🎮 Real-time Leaderboards with ElastiCache for Redis</title><link href="/2013/09/08/real-time-leaderboards-with-elasticache-for-redis/" rel="alternate" type="text/html" title="🎮 Real-time Leaderboards with ElastiCache for Redis" /><published>2013-09-08T20:00:00+00:00</published><updated>2013-09-08T20:00:00+00:00</updated><id>/2013/09/08/real-time-leaderboards-with-elasticache-for-redis</id><content type="html" xml:base="/2013/09/08/real-time-leaderboards-with-elasticache-for-redis/"><![CDATA[<p>With the launch of <a href="http://aws.typepad.com/aws/2013/09/amazon-elasticache-now-with-a-dash-of-redis.html">AWS ElastiCache for Redis</a> this week, I realized my <a href="http://github.com/nateware/redis-objects">redis-objects</a> gem could use a few more examples. Paste this code into your game’s Ruby backend for real-time leaderboards with Redis.</p>

<p><a href="http://redis.io/topics/data-types">Redis Sorted Sets</a> are the ideal data type for leaderboards. This is a data structure that guarantees uniqueness of members, plus keeps members sorted in real time. Yep, that’s pretty much exactly what we want. The Redis sorted set commands to populate a leaderboard would be:</p>

<pre><code class="language-redis">ZADD leaderboard 556  "Andy"
ZADD leaderboard 819  "Barry"
ZADD leaderboard 105  "Carl"
ZADD leaderboard 1312 "Derek"
</code></pre>

<p>This would create a <code class="language-plaintext highlighter-rouge">leaderboard</code> set with members auto-sorted based on their score. To get a leaderboard sorted with highest score as highest ranked, do:</p>

<pre><code class="language-redis">ZREVRANGE leaderboard 0 -1
1) "Derek"
2) "Barry"
3) "Andy"
4) "Carl"
</code></pre>

<p>This returns the set’s members sorted in reverse (descending) order. Refer to the <a href="http://redis.io/commands/zrevrange">Redis docs for ZREVRANGE</a> for more details.</p>

<h2 id="wasnt-this-a-ruby-post">Wasn’t this a Ruby post?</h2>

<p>Back to <a href="http://github.com/nateware/redis-objects">redis-objects</a>. Let’s start with a direct Ruby translation of the above:</p>

<div class="language-ruby highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">require</span> <span class="s1">'redis-objects'</span>
<span class="no">Redis</span><span class="p">.</span><span class="nf">current</span> <span class="o">=</span> <span class="no">Redis</span><span class="p">.</span><span class="nf">new</span><span class="p">(</span><span class="ss">host: </span><span class="s1">'localhost'</span><span class="p">)</span>

<span class="n">lb</span> <span class="o">=</span> <span class="no">Redis</span><span class="o">::</span><span class="no">SortedSet</span><span class="p">.</span><span class="nf">new</span><span class="p">(</span><span class="s1">'leaderboard'</span><span class="p">)</span>
<span class="n">lb</span><span class="p">[</span><span class="s2">"Andy"</span><span class="p">]</span>  <span class="o">=</span> <span class="mi">556</span>
<span class="n">lb</span><span class="p">[</span><span class="s2">"Barry"</span><span class="p">]</span> <span class="o">=</span> <span class="mi">819</span>
<span class="n">lb</span><span class="p">[</span><span class="s2">"Carl"</span><span class="p">]</span>  <span class="o">=</span> <span class="mi">105</span>
<span class="n">lb</span><span class="p">[</span><span class="s2">"Derek"</span><span class="p">]</span> <span class="o">=</span> <span class="mi">1312</span>

<span class="nb">puts</span> <span class="n">lb</span><span class="p">.</span><span class="nf">revrange</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">)</span>  <span class="c1"># ["Derek", "Barry", "Andy", "Carl"]</span>
</code></pre></div></div>

<p>And… we’re done. Ship it.</p>

<h2 id="throw-that-on-rails">Throw that on Rails</h2>

<p>OK, so our game probably has a bit more to it. Let’s assume there’s a <code class="language-plaintext highlighter-rouge">User</code> database table, with a <code class="language-plaintext highlighter-rouge">score</code> column, created like so:</p>

<div class="language-ruby highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">class</span> <span class="nc">CreateUsers</span> <span class="o">&lt;</span> <span class="no">ActiveRecord</span><span class="o">::</span><span class="no">Migration</span>
  <span class="k">def</span> <span class="nf">up</span>
    <span class="n">create_table</span> <span class="ss">:users</span> <span class="k">do</span> <span class="o">|</span><span class="n">t</span><span class="o">|</span>
      <span class="n">t</span><span class="p">.</span><span class="nf">string</span>  <span class="ss">:name</span>
      <span class="n">t</span><span class="p">.</span><span class="nf">integer</span> <span class="ss">:score</span>
    <span class="k">end</span>
  <span class="k">end</span>
<span class="k">end</span>
</code></pre></div></div>

<p>We can integrate a sorted set leaderboard with our <code class="language-plaintext highlighter-rouge">User</code> model in two lines:</p>

<div class="language-ruby highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">class</span> <span class="nc">User</span> <span class="o">&lt;</span> <span class="no">ActiveRecord</span><span class="o">::</span><span class="no">Base</span>
  <span class="kp">include</span> <span class="no">Redis</span><span class="o">::</span><span class="no">Objects</span>
  <span class="n">sorted_set</span> <span class="ss">:leaderboard</span><span class="p">,</span> <span class="ss">global: </span><span class="kp">true</span>
<span class="k">end</span>
</code></pre></div></div>

<p>Since we’re going to have just a single leaderboard (rather than one per user), we use the <code class="language-plaintext highlighter-rouge">global</code> flag. This will create a <code class="language-plaintext highlighter-rouge">User.leaderboard</code> sorted set that we can then access anywhere:</p>

<div class="language-ruby highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">puts</span> <span class="no">User</span><span class="p">.</span><span class="nf">leaderboard</span><span class="p">.</span><span class="nf">members</span>
</code></pre></div></div>

<p>(<strong>Important:</strong> This <em>doesn’t</em> have to be ActiveRecord – you could use Mongoid or DataMapper or Sequel or Dynamoid or any other DB model.)</p>

<p>We’ll add a hook to update our leaderboard when we get a new high score. Since we now have a database table, we’ll index our sorted set by the user’s ID, which is guaranteed to be unique:</p>

<div class="language-ruby highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">class</span> <span class="nc">User</span> <span class="o">&lt;</span> <span class="no">ActiveRecord</span><span class="o">::</span><span class="no">Base</span>
  <span class="kp">include</span> <span class="no">Redis</span><span class="o">::</span><span class="no">Objects</span>
  <span class="n">sorted_set</span> <span class="ss">:leaderboard</span><span class="p">,</span> <span class="ss">global: </span><span class="kp">true</span>

  <span class="n">after_save</span> <span class="ss">:update_leaderboard</span>  <span class="c1"># fires on create too, so create! populates the set</span>
  <span class="k">def</span> <span class="nf">update_leaderboard</span>
    <span class="nb">self</span><span class="p">.</span><span class="nf">class</span><span class="p">.</span><span class="nf">leaderboard</span><span class="p">[</span><span class="nb">id</span><span class="p">]</span> <span class="o">=</span> <span class="n">score</span>
  <span class="k">end</span>
<span class="k">end</span>
</code></pre></div></div>

<p>Save a few records:</p>

<div class="language-ruby highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="no">User</span><span class="p">.</span><span class="nf">create!</span><span class="p">(</span><span class="ss">name: </span><span class="s2">"Andy"</span><span class="p">,</span>  <span class="ss">score: </span><span class="mi">556</span><span class="p">)</span>
<span class="no">User</span><span class="p">.</span><span class="nf">create!</span><span class="p">(</span><span class="ss">name: </span><span class="s2">"Barry"</span><span class="p">,</span> <span class="ss">score: </span><span class="mi">819</span><span class="p">)</span>
<span class="no">User</span><span class="p">.</span><span class="nf">create!</span><span class="p">(</span><span class="ss">name: </span><span class="s2">"Carl"</span><span class="p">,</span>  <span class="ss">score: </span><span class="mi">105</span><span class="p">)</span>
<span class="no">User</span><span class="p">.</span><span class="nf">create!</span><span class="p">(</span><span class="ss">name: </span><span class="s2">"Derek"</span><span class="p">,</span> <span class="ss">score: </span><span class="mi">1312</span><span class="p">)</span>
</code></pre></div></div>

<p>Fetch the leaderboard:</p>

<div class="language-ruby highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="vi">@user_ids</span> <span class="o">=</span> <span class="no">User</span><span class="p">.</span><span class="nf">leaderboard</span><span class="p">.</span><span class="nf">revrange</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">)</span>
<span class="nb">puts</span> <span class="vi">@user_ids</span>  <span class="c1"># [4, 2, 1, 3]</span>
</code></pre></div></div>

<p>And now we have a Redis leaderboard sorted in real time, auto-updated any time we get a new high score.</p>

<h2 id="but-mysql-has-order-by">But MySQL has ORDER BY</h2>

<p>The skeptical reader may wonder why not just sort in MySQL, or whatever the kewl new database flavor of the week is. Outside of offloading our main database, things get more interesting when we want to know our own rank:</p>

<div class="language-ruby highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">class</span> <span class="nc">User</span> <span class="o">&lt;</span> <span class="no">ActiveRecord</span><span class="o">::</span><span class="no">Base</span>
  <span class="c1"># ... other stuff remains ...</span>

  <span class="k">def</span> <span class="nf">my_rank</span>
    <span class="nb">self</span><span class="p">.</span><span class="nf">class</span><span class="p">.</span><span class="nf">leaderboard</span><span class="p">.</span><span class="nf">revrank</span><span class="p">(</span><span class="nb">id</span><span class="p">)</span> <span class="o">+</span> <span class="mi">1</span>
  <span class="k">end</span>
<span class="k">end</span>
</code></pre></div></div>

<p>Then:</p>

<div class="language-ruby highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="vi">@user</span> <span class="o">=</span> <span class="no">User</span><span class="p">.</span><span class="nf">find</span><span class="p">(</span><span class="mi">1</span><span class="p">)</span> <span class="c1"># Andy</span>
<span class="nb">puts</span> <span class="vi">@user</span><span class="p">.</span><span class="nf">my_rank</span>   <span class="c1"># 3</span>
</code></pre></div></div>

<p>Getting a numeric rank for a row in MySQL would require adding a new “rank” column, and then running a job that re-ranks the entire table. Doing this in real time means clobbering MySQL with a global re-rank every time <em>anyone’s</em> score changes. This makes MySQL unhappy, especially with lots of users.</p>
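<p>To make the comparison concrete, here’s a plain-Ruby sketch of the ranking semantics (no database required): rank is just 1 + the number of higher scores. Redis maintains that incrementally inside the sorted set, while SQL has to recompute it on every lookup or score change. The hash below stands in for the users we created earlier:</p>

```ruby
# id => score, matching the four users created above
scores = { 1 => 556, 2 => 819, 3 => 105, 4 => 1312 }

# rank = 1 + number of users with a higher score -- this is what ZREVRANK
# answers in O(log n), and what SQL must rescan the table to compute
def rank(scores, id)
  scores.count { |_, s| s > scores[id] } + 1
end

puts rank(scores, 1)  # Andy => 3
```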

<p>Kids are calling so that’s all for now. Enjoy!</p>]]></content><author><name>Nate Wiger</name></author><category term="aws" /><category term="gaming" /><category term="redis" /><category term="ruby" /><summary type="html"><![CDATA[With the launch of AWS ElastiCache for Redis this week, I realized my redis-objects gem could use a few more examples. Paste this code into your game’s Ruby backend for real-time leaderboards with Redis.]]></summary></entry><entry><title type="html">🔧 Linux Network Tuning for 2013</title><link href="/2013/04/06/linux-network-tuning-for-2013/" rel="alternate" type="text/html" title="🔧 Linux Network Tuning for 2013" /><published>2013-04-06T20:00:00+00:00</published><updated>2013-04-06T20:00:00+00:00</updated><id>/2013/04/06/linux-network-tuning-for-2013</id><content type="html" xml:base="/2013/04/06/linux-network-tuning-for-2013/"><![CDATA[<p>Linux distributions still ship with the assumption that they will be multi-user systems, meaning resource limits are set for a normal human doing day-to-day desktop work. For a high-performance system trying to serve thousands of concurrent network clients, these limits are far too low. If you have an online game or web app that’s pushing the envelope, these settings can help increase awesomeness.</p>

<p>The parameters we’ll adjust are as follows:</p>

<ul>
  <li>
    <p>Increase max open files to 100,000 from the default (typically 1024). In Linux, every open network socket requires a file descriptor. Increasing this limit will ensure that lingering <code class="language-plaintext highlighter-rouge">TIME_WAIT</code> sockets and other consumers of file descriptors don’t impact our ability to handle lots of concurrent requests.</p>
  </li>
  <li>
    <p>Decrease the time that sockets stay in the <code class="language-plaintext highlighter-rouge">TIME_WAIT</code> state by lowering <code class="language-plaintext highlighter-rouge">tcp_fin_timeout</code> from its default of 60 seconds to 10. You can lower this even further, but too low, and you can run into socket close errors in networks with lots of jitter. We will also set <code class="language-plaintext highlighter-rouge">tcp_tw_reuse</code> to tell the kernel it can reuse sockets in the <code class="language-plaintext highlighter-rouge">TIME_WAIT</code> state.</p>
  </li>
  <li>
    <p>Increase the port range for ephemeral (outgoing) ports, by lowering the minimum port to 10000 (normally 32768), and raising the maximum port to 65000 (normally 61000). <strong>Important:</strong> This means you can’t have server software that attempts to bind to a port above 9999! If you need to bind to a higher port, say 10075, just modify this port range appropriately.</p>
  </li>
  <li>
    <p>Increase the read/write TCP buffers (<code class="language-plaintext highlighter-rouge">tcp_rmem</code> and <code class="language-plaintext highlighter-rouge">tcp_wmem</code>) to allow for larger window sizes. This enables more data to be transferred without ACKs, increasing throughput. We won’t tune the total TCP memory (<code class="language-plaintext highlighter-rouge">tcp_mem</code>), since this is automatically tuned based on available memory by Linux.</p>
  </li>
  <li>
    <p>Decrease the VM <code class="language-plaintext highlighter-rouge">swappiness</code> parameter, which discourages the kernel from swapping memory to disk. By default, Linux attempts to swap out idle processes fairly aggressively, which is counterproductive for long-running server processes that desire low latency.</p>
  </li>
  <li>
    <p>Increase the TCP congestion window, and disable reverting to TCP slow start after the connection is idle. By default, TCP connections start with a small congestion window and ramp it up gradually over several round trips. This results in unnecessary slowness at the start of every request – which is especially bad for HTTP.</p>
  </li>
</ul>
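<p>A quick back-of-the-envelope on why the port range and <code class="language-plaintext highlighter-rouge">TIME_WAIT</code> settings work together: the sustainable rate of outgoing connections to a single destination is roughly the number of available ephemeral ports divided by the seconds each dead socket lingers. Using the numbers above:</p>

```ruby
# Rough ceiling on outgoing connections/sec to a single destination:
# available ephemeral ports / seconds each dead socket lingers
ports_default = 61000 - 32768   # stock ephemeral port range
ports_tuned   = 65000 - 10000   # the range we set below

puts ports_default / 60   # stock 60s timeout => 470 conn/sec
puts ports_tuned / 10     # tuned 10s timeout => 5500 conn/sec
```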

<p>Ok, enough chat, more code.</p>

<h2 id="kernel-parameters">Kernel Parameters</h2>

<p>To start, edit <code class="language-plaintext highlighter-rouge">/etc/sysctl.conf</code> and add these lines:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># /etc/sysctl.conf</span>

<span class="c"># Increase system file descriptor limit</span>
fs.file-max <span class="o">=</span> 100000

<span class="c"># Discourage Linux from swapping idle processes to disk (default = 60)</span>
vm.swappiness <span class="o">=</span> 10

<span class="c"># Increase the ephemeral port range</span>
net.ipv4.ip_local_port_range <span class="o">=</span> 10000 65000

<span class="c"># Increase Linux autotuning TCP buffer limits</span>
<span class="c"># Set max to 16MB for 1GE and 32M (33554432) or 54M (56623104) for 10GE</span>
<span class="c"># Don't set tcp_mem itself! Let the kernel scale it based on RAM.</span>
net.core.rmem_max <span class="o">=</span> 16777216
net.core.wmem_max <span class="o">=</span> 16777216
net.core.rmem_default <span class="o">=</span> 16777216
net.core.wmem_default <span class="o">=</span> 16777216
net.core.optmem_max <span class="o">=</span> 40960
net.ipv4.tcp_rmem <span class="o">=</span> 4096 87380 16777216
net.ipv4.tcp_wmem <span class="o">=</span> 4096 65536 16777216

<span class="c"># Make room for more TIME_WAIT sockets due to more clients,</span>
<span class="c"># and allow them to be reused if we run out of sockets</span>
<span class="c"># Also increase the max packet backlog</span>
net.core.netdev_max_backlog <span class="o">=</span> 50000
net.ipv4.tcp_max_syn_backlog <span class="o">=</span> 30000
net.ipv4.tcp_max_tw_buckets <span class="o">=</span> 2000000
net.ipv4.tcp_tw_reuse <span class="o">=</span> 1
net.ipv4.tcp_fin_timeout <span class="o">=</span> 10

<span class="c"># Disable TCP slow start on idle connections</span>
net.ipv4.tcp_slow_start_after_idle <span class="o">=</span> 0

<span class="c"># If your servers talk UDP, also up these limits</span>
net.ipv4.udp_rmem_min <span class="o">=</span> 8192
net.ipv4.udp_wmem_min <span class="o">=</span> 8192

<span class="c"># Disable source routing and redirects</span>
net.ipv4.conf.all.send_redirects <span class="o">=</span> 0
net.ipv4.conf.all.accept_redirects <span class="o">=</span> 0
net.ipv4.conf.all.accept_source_route <span class="o">=</span> 0

<span class="c"># Log packets with impossible addresses for security</span>
net.ipv4.conf.all.log_martians <span class="o">=</span> 1
</code></pre></div></div>

<p>Since some of these settings can be cached by networking services, it’s best to reboot to apply them properly (<code class="language-plaintext highlighter-rouge">sysctl -p</code> does not work reliably).</p>

<h2 id="open-file-descriptors">Open File Descriptors</h2>

<p>In addition to the Linux <code class="language-plaintext highlighter-rouge">fs.file-max</code> kernel setting above, we need to edit a few more files to increase the file descriptor limits. The reason is that the kernel setting only sets an absolute max; we still need to tell the shell what our per-user session limits are.</p>

<p>So, first edit <code class="language-plaintext highlighter-rouge">/etc/security/limits.conf</code> to increase our session limits:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># /etc/security/limits.conf</span>
<span class="c"># allow all users to open 100000 files</span>
<span class="c"># alternatively, replace * with an explicit username</span>
<span class="k">*</span>       soft    nofile  100000
<span class="k">*</span>       hard    nofile  100000
</code></pre></div></div>

<p>Next, <code class="language-plaintext highlighter-rouge">/etc/ssh/sshd_config</code> needs to make sure to use PAM:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># /etc/ssh/sshd_config</span>
<span class="c"># ensure we consult pam</span>
UsePAM <span class="nb">yes</span>
</code></pre></div></div>

<p>And finally, <code class="language-plaintext highlighter-rouge">/etc/pam.d/sshd</code> needs to load the modified <code class="language-plaintext highlighter-rouge">limits.conf</code>:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># /etc/pam.d/sshd</span>
<span class="c"># ensure pam includes our limits</span>
session required pam_limits.so
</code></pre></div></div>

<p>You can confirm these settings have taken effect by opening a new ssh connection to the box and checking <code class="language-plaintext highlighter-rouge">ulimit</code>:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span><span class="nb">ulimit</span> <span class="nt">-n</span>
100000
</code></pre></div></div>

<p>Why Linux has evolved to require 4 different settings in 4 different files is beyond me, but that’s a topic for a different post. :)</p>

<h2 id="tcp-congestion-window">TCP Congestion Window</h2>

<p>Finally, let’s increase the TCP congestion window from 1 to 10 segments. This is set on the route itself, which makes it a more manual process than our <code class="language-plaintext highlighter-rouge">sysctl</code> settings. First, use <code class="language-plaintext highlighter-rouge">ip route</code> to find the default route, the first line of the output below:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>ip route
default via 10.248.77.193 dev eth0 proto kernel
10.248.77.192/26 dev eth0  proto kernel  scope <span class="nb">link  </span>src 10.248.77.212
</code></pre></div></div>

<p>Copy that line, and paste it back to the <code class="language-plaintext highlighter-rouge">ip route change</code> command, adding <code class="language-plaintext highlighter-rouge">initcwnd 10</code> to the end to increase the congestion window:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span><span class="nb">sudo </span>ip route change default via 10.248.77.193 dev eth0 proto kernel initcwnd 10
</code></pre></div></div>

<p>To make this persistent across reboots, you’ll need to add a few lines of bash like the following to a startup script somewhere. Often the easiest candidate is just pasting these lines into <code class="language-plaintext highlighter-rouge">/etc/rc.local</code>:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">defrt</span><span class="o">=</span><span class="sb">`</span>ip route | <span class="nb">grep</span> <span class="s2">"^default"</span> | <span class="nb">head</span> <span class="nt">-1</span><span class="sb">`</span>
ip route change <span class="nv">$defrt</span> initcwnd 10
</code></pre></div></div>

<p>Once you’re done with all these changes, you’ll need to either bundle a new machine image, or integrate these changes into a system management package such as Chef or Puppet.</p>

<h2 id="additional-reading">Additional Reading</h2>

<p>The above settings were pulled together from a variety of other resources out there, and then validated through testing on EC2. You may need to tweak the exact limits depending on your application’s profile. Below are a few additional posts that make good reading:</p>

<ul>
  <li><a href="http://fasterdata.es.net/host-tuning/linux/">US Dept of Energy Guide to Linux TCP Tuning</a></li>
  <li><a href="http://russ.garrett.co.uk/2009/01/01/linux-kernel-tuning/">Linux tuning parameters used by Last.fm</a></li>
  <li><a href="http://www.frozentux.net/ipsysctl-tutorial/chunkyhtml/tcpvariables.html">Definitions of Linux TCP kernel variables</a></li>
  <li><a href="http://www.ncftp.com/ncftpd/doc/misc/ephemeral_ports.html">Understanding ephemeral ports</a></li>
  <li><a href="http://www.cdnplanet.com/blog/tune-tcp-initcwnd-for-optimum-performance/">In-depth post by CDN Planet on TCP slow start (with tests!)</a></li>
  <li><a href="http://research.google.com/pubs/pub36640.html">Google Research Paper Proposing a Default Congestion Window of 10 Segments</a></li>
  <li><a href="http://serverfault.com/questions/234534/is-it-dangerous-to-change-the-value-of-proc-sys-net-ipv4-tcp-tw-reuse">Determining a safe value for tcp_tw_reuse (ServerFault)</a></li>
  <li><a href="http://stackoverflow.com/questions/8893888/dropping-of-connections-with-tcp-tw-recycle">Dropping of connections with tcp_tw_recycle (StackOverflow)</a></li>
</ul>]]></content><author><name>Nate Wiger</name></author><category term="technology" /><category term="linux" /><category term="performance" /><category term="unix" /><summary type="html"><![CDATA[Linux distributions still ship with the assumption that they will be multi-user systems, meaning resource limits are set for a normal human doing day-to-day desktop work. For a high-performance system trying to serve thousands of concurrent network clients, these limits are far too low. If you have an online game or web app that’s pushing the envelope, these settings can help increase awesomeness.]]></summary></entry><entry><title type="html">💾 Replacing Macbook HD with an SSD</title><link href="/2012/12/01/replacing-macbook-hd-with-an-ssd/" rel="alternate" type="text/html" title="💾 Replacing Macbook HD with an SSD" /><published>2012-12-01T20:00:00+00:00</published><updated>2012-12-01T20:00:00+00:00</updated><id>/2012/12/01/replacing-macbook-hd-with-an-ssd</id><content type="html" xml:base="/2012/12/01/replacing-macbook-hd-with-an-ssd/"><![CDATA[<p>My poor little laptop hard drive had been whining and whimpering, so I upgraded it to an SSD. Turned out to be inexpensive and very DIY friendly, so here are my cliffs notes.</p>

<h2 id="step-1-choose-an-ssd">Step 1: Choose an SSD</h2>

<p><img src="/assets/images/mercury-extreme.jpg" alt="Mucho fasto SSD" /></p>

<p>The consensus is that <a href="http://www.macsales.com">Other World Computing (OWC)</a> makes the most Mac-compatible SSDs. I went with the <a href="http://eshop.macsales.com/shop/SSD/OWC/Mercury_6G/">OWC Mercury Extreme Pro SSD</a>. 120GB cost me $149. If you have an older Macbook (pre-2011), or just want to save money, you can go with the slightly slower <a href="http://eshop.macsales.com/shop/SSD/OWC/Mercury_Electra_6G/">OWC Mercury Electra SSD</a> instead. I sprung for FedEx 2-day shipping for ~$10.</p>

<h2 id="step-2-buy-a-usb-drive-case">Step 2: Buy a USB Drive Case</h2>

<p>This is so you can attach the new drive to your laptop temporarily, to copy over your data. Needs to be a 2.5” SATA for the SSD, with a USB connection for the laptop. Amazon has the <a href="http://www.amazon.com/gp/product/B002JQNXZC/ref=oh_details_o00_s00_i00">Vantec NexStar 2.5-Inch SATA to USB 2.0 External Enclosure</a> for $7.99. Done.</p>

<h2 id="step-3-put-drive-in-case">Step 3: Put Drive in Case</h2>

<p>Open the NexStar drive case, and plug the OWC SSD into the connector. Close it up and attach it to your laptop via the USB cable. This step should seem very simple. If not, rethink continuing w/o help.</p>

<h2 id="step-4-optional-grab-a-beer">Step 4: (Optional) Grab a Beer</h2>

<p><a href="http://www.ratebeer.com/beer/drakes-denogginizer/30946/">Drake’s Denogginizer</a> goes well with upgrade-related tasks. Warning: With 22oz at 9.75%, the clock is now ticking.</p>

<h2 id="step-5-partition-the-drive">Step 5: Partition the Drive</h2>

<p><img src="/assets/images/disk-utility.png" alt="Disk Utility Window" /></p>

<p>Once you attach the drive, a window will pop up saying something like “Unrecognized drive format”. Click the “Initialize” button to open up Disk Utility. You should see a screen like the one above. Click the “Partition” button in the right pane, and do the following:</p>

<ol>
  <li>Create a partition with all the available space, named whatever you want. I called mine “SSD Boot HD”.</li>
  <li>Click “+” to add a partition named “Recovery HD” of at least 750 MB in size. This is required for OSX Lion, Mountain Lion, or later, or if you’re using FileVault (disk encryption).</li>
</ol>

<p>Both should be the default type of “Mac OS Extended (Journaled)”. It’s important that the “Recovery HD” partition be second, because of restrictions on how Lion/Mountain Lion can and can’t resize boot partitions.</p>

<h2 id="step-6-clone-the-drive">Step 6: Clone the Drive</h2>

<p><img src="/assets/images/carbon-copy-cloner.png" alt="Carbon Copy Cloner" /></p>

<p>Download <a href="http://www.bombich.com/download.html">Carbon Copy Cloner</a> and install it. There’s a fully-functional 30-day trial so you can decide whether to purchase a license later. It’s a great program and worth supporting if possible.</p>

<p>When it first starts up, it’ll ask you if you want to see the “Quick Start Guide”. Say yes. It opens up instructions telling you exactly how to copy your existing hard drive to a new external drive.</p>

<p>All you do is select your existing drive on the left, probably “Macintosh HD”, and your new drive on the right (whatever you called it in Step 5), and click “Clone”.</p>

<p>You may get a popup saying something like, “Recovery HD partition does not contain the correct OS.” If so, follow the on-screen instructions to update it. I found CCC didn’t properly reset itself after this, so I had to exit, re-launch, and then click “Clone” again to start the clone.</p>

<h2 id="step-7-wait">Step 7: Wait</h2>

<p>Sip on your beer from Step 4.</p>

<h2 id="step-8-shutdown-mac-swap-drives">Step 8: Shutdown Mac, Swap Drives</h2>

<p>Once the clone is finished, shutdown and unplug the power cable. Pull the external drive out of the case, reversing Step 3. Then, follow <a href="http://macinstruct.com/node/407">these excellent instructions</a> to physically install the SSD in your Macbook. Requires a teeny tiny midget screwdriver.</p>

<h2 id="step-9-boot-mac-enjoy">Step 9: Boot Mac, Enjoy</h2>

<p>Everything should Just Work, although I did notice that some programs like Dropbox required me to reenter my email/password the first time. For fun, try clicking on a beastly program like Photoshop or Word and it should open up unnervingly fast.</p>]]></content><author><name>Nate Wiger</name></author><category term="technology" /><category term="laptop" /><category term="mac" /><summary type="html"><![CDATA[My poor little laptop hard drive had been whining and whimpering, so I upgraded it to an SSD. Turned out to be inexpensive and very DIY friendly, so here are my cliffs notes.]]></summary></entry><entry><title type="html">☢️ Atomic Rant Redux</title><link href="/2010/06/14/atomic-rant-redux/" rel="alternate" type="text/html" title="☢️ Atomic Rant Redux" /><published>2010-06-14T20:00:00+00:00</published><updated>2010-06-14T20:00:00+00:00</updated><id>/2010/06/14/atomic-rant-redux</id><content type="html" xml:base="/2010/06/14/atomic-rant-redux/"><![CDATA[<p>My <a href="/an-atomic-rant.html">atomic rant</a> has gotten a ton of traffic – more than I foresaw. Seems atomicity is a hot topic in the web world these days. Increasing user concurrency, coupled with more interactive apps, exposes all sorts of edge cases. I wanted to write a follow-up post to step back and look at a few more high-level concerns with atomicity, as well as some Redis-specific issues we’ve seen.</p>

<h2 id="know-your-actors">Know Your Actors</h2>

<p><img src="/assets/images/new-moon-official-cast.jpg" alt="new-moon-official-cast" /></p>

<p>In my <a href="/an-atomic-rant.html">original rant</a>, I used the example of students enrolling in online classes to illustrate why atomicity was crucial to operations with multiple actors. And speaking of actors, they’re an even better target analogy. You need to assume your actors are all going to try to jam through the audition door at the same time. What happens if they are all talking to the director at once? How many conversations can continue in parallel? If you’re careful, you can get away with one final gate at the end, which makes your life infinitely easier. That is, funnel everyone to a decision point, congratulate one person, then tell the others sorry.</p>

<p>Of course, if that funnel is too long, you’re going to piss off your users in a major way. If you’ve ever bought tickets from Ticketmaster, you’re familiar with this problem. Granted they’ve gotten much better over the years (which is saying something…), and this is partially due to embracing the Amazon <a href="http://blogs.msdn.com/b/pathelland/archive/2007/05/15/memories-guesses-and-apologies.aspx">guesses and apologies approach</a>. If you have 200 tickets left, a person can probably get one. But if you have 10 tickets left, they’re probably going to get screwed. If you can help with the user’s expectations (“less than 10 tickets left!”) then people are more likely to be forgiving.</p>

<p>In the world of online games, this translates to showing players the number of slots left in a game, but then handling the situation where there were 2 slots left but you were the third person to hit “Submit”. You <strong>always</strong> need to handle these errors, because there’s no way to completely eliminate race conditions in a networked application.</p>
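<p>Here’s a toy Ruby sketch of that final gate, with threads standing in for users (the slot counts are made up for illustration; in production the gate would be your Redis lock or DB transaction, not an in-process mutex). Five people race for two slots; exactly two win, and the code path for the other three is the error handling you must write:</p>

```ruby
# Five "users" race for two slots behind a single atomic gate.
slots   = 2
gate    = Mutex.new
winners = []
losers  = []

threads = 5.times.map do |i|
  Thread.new do
    gate.synchronize do
      # The decision point: congratulate one, tell the rest sorry
      winners.size < slots ? winners << i : losers << i
    end
  end
end
threads.each(&:join)

puts winners.size  # 2 -- got in
puts losers.size   # 3 -- show these users the "sorry" page
```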

<h2 id="recovering-from-hiccups">Recovering from Hiccups</h2>

<p><img src="/assets/images/isharescapsize.jpg" alt="isharescapsize" /></p>

<p>Sooner or later, your slick, smooth-running atomic system is going to have problems. Even if it’s well-engineered, you could have a large outage such as a system crash, datacenter failure, etc. Plan on it.</p>

<p>Using Redis to offload atomic ops from the DB yielded big performance benefits, but added fragility. You now have two systems that must stay in sync. If either one crashes, there’s the possibility that you’re going to have dangling locks for records that are ok, or vice-versa. So you need a way to clear them. In a perfect world with infinite time, you’d be able to engineer a self-detecting, self-repairing system that can auto-recover. Good luck with that. A cron job that deletes locks older than a certain time works pretty well for the rest of us.</p>

<p>It’s also a good idea to have a script you can run manually, in the event you know you need to reset certain things. For example, to handle the case where you know your Redis node went down, you could have a script that deletes all locks where the ID is &gt; the current max ID in the DB. Oracle and other systems have similar concepts built into their <a href="http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_lock.htm">native locking procedures</a>.</p>

<h2 id="troubleshooting-redis-is-a-pain">Troubleshooting Redis is a Pain</h2>

<p>Unfortunately, Redis is lacking in the way of tools because it is still young. There is the PHP <a href="http://code.google.com/p/redis-admin/">Redis Admin</a> app, but its development appears to have stalled. Beyond that it’s pretty much roll-your-own-scripts at this point. We’ve thought about developing a general-purpose Redis app/tool ourselves, but with the Redis 2.0 changes and <a href="http://antirez.com/post/vmware-the-new-redis-home.html">VMWare hiring Salvatore</a> the tools side is a bit “wait and see”.</p>

<p>So before you start throwing all of your critical data into Redis, realize it’s a bit black-box at this point (or at least, a really dark gray). I’m not a GUI guy personally – I prefer command-line tools due to my sysadmin days – but for many programmers, GUI tools help debugging <em>a lot</em>. You need to make sure your programmers working with Redis can debug it when you have problems, which means a bigger investment in scripts vs. just downloading <a href="http://wb.mysql.com/">MySQL Workbench</a> or <a href="http://www.oracle.com/technology/products/database/sql_developer/index.html">Oracle SQL Developer</a>.</p>

<h2 id="check-and-double-check">Check and Double-Check</h2>

<p>The last thing worth mentioning is this: Don’t trust your own app. Even if you have an atomic gate at the start of a transaction, do sanity checking at the end too. There are a few reasons for this:</p>

<ul>
  <li>The lock may have expired for some reason, and you didn’t test for this</li>
  <li>Your locking server may have crashed when you’re in the middle of a transaction</li>
  <li>There could be a background job overlapping with a front-end transaction</li>
  <li>Your software may have bugs (improbable, I know)</li>
</ul>

<p>For example, we had a background job that was using the same lock as a front-end service. This ended up being a design mistake, but it was difficult to track down because it happened very infrequently. The only way we found it was we had assertions that would get hit periodically on supposedly impossible conditions. Once we correlated the times with the background job running, we were able to fix the issue rather quickly.</p>

<p>So my opinion is this: Try to do the right thing, but if it screws up, apologize to the user, recover, and move on.</p>]]></content><author><name>Nate Wiger</name></author><category term="redis" /><category term="technology" /><category term="atomicity" /><category term="aws" /><category term="ruby" /><summary type="html"><![CDATA[My atomic rant has gotten a ton of traffic – more than I foresaw. Seems atomicity is a hot topic in the web world these days. Increasing user concurrency, coupled with more interactive apps, exposes all sorts of edge cases. I wanted to write a follow-up post to step back and look at a few more high-level concerns with atomicity, as well as some Redis-specific issues we’ve seen.]]></summary></entry></feed>