<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Boyleing Point</title>
    <link>https://lukeboyle.com</link>
    <atom:link href="https://lukeboyle.com/feed.xml" rel="self" type="application/rss+xml" />
    <description>Luke Boyle on technology, faith, and adjacent topics.</description>
    <language>en</language>
    <lastBuildDate>Sat, 09 May 2026 08:32:13 GMT</lastBuildDate>
    <item>
      <title><![CDATA[Our God, the painter]]></title>
      <link>https://lukeboyle.com/blog/our-god-the-painter/</link>
      <guid>https://lukeboyle.com/blog/our-god-the-painter/</guid>
      <pubDate>Mon, 14 Aug 2023 00:00:00 GMT</pubDate>
      <description><![CDATA[In celebrating the beauty of creation, I see God, and I celebrate my limitations.]]></description>
      <content:encoded><![CDATA[<p><img src="https://media.graphassets.com/uu0pjsUORmm1v3NJRILF" alt="IMG\_8935.JPG"></p>
<p>When I look at nature I see a glimpse of the infinite creativity of our Father. Not only did He create every species, down to the seemingly endless variations of trees, brush, mammals, and insects, but He has seen every distinct movement of every creeping thing, down to the blood pulsing through their veins and the fur on their backs swaying in the wind. He has seen the way every single photon cast off from the sun makes contact with the trees, diffusing through their leaves, and touching down on the ground for all of history <em>ad infinitum</em>.</p>
<p><img src="https://media.graphassets.com/R8INTc1jSGqlQjnEFZqa" alt="2023-01-27 19-51-44 (2).jpeg"></p>
<p>As I gaze at the sky, He gazes back. He is the light that makes contact with the photoreceptor cells in my eyes to reveal the majesty of creation. And every sunset that splashes across the sky, He has painted thoughtfully on an ever-shifting canvas. Yet to me, every day, in its own way, the sun sets more beautifully than I could possibly imagine. And in my limitation, every day is a mystery.</p>
<p><img src="https://media.graphassets.com/0yIwe78rTnyZLrps2aXL" alt="2023-01-05 19-02-17.jpeg"></p>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[In search of humanity]]></title>
      <link>https://lukeboyle.com/blog/in-search-of-humanity/</link>
      <guid>https://lukeboyle.com/blog/in-search-of-humanity/</guid>
      <pubDate>Sun, 02 Apr 2023 00:00:00 GMT</pubDate>
      <description><![CDATA[When you look at the world today, is it really surprising that people are more aimless and depressed than in all of modern history? Though our material conditions are still leagues ahead of our forebears, we have been completely deracinated and given nothing to aspire to.]]></description>
      <content:encoded><![CDATA[<p>Having regrettably fallen prey to the allure of dating apps again, I am filled with a renewed sense of sorrow for the sheer number of people who, rather than seeming to express any genuine individual personality, appear to have been stamped out on an assembly line and ejected from a factory. The most pervasive (although perhaps the most understandable) of these living tropes is the &quot;traveller&quot; persona. For example, the Hinge prompt &quot;What I&#39;m looking for&quot; is more often than not answered with &quot;Someone to travel with&quot;. I&#39;m not against travelling, but for large swathes of people in our wealthy, decadent society, travelling has become an end in itself rather than a means of pursuing enriching experience. This phenomenon is epitomised by the Contiki tour and the cruise.</p>
<p>For modern man, travelling to another country typically offers no transcendent experience beyond drinking, eating, and taking the same photo of the Leaning Tower of Pisa that tens of millions of people have taken before you. This ritual is no different to me than if you went to a local district themed after your culture of choice, ate at a restaurant where they spoke their native tongue to you, then stepped outside, stuck your head through a photo stand-in, and checked off the photo in front of your attraction of choice. The only real difference is some abstract notion that you were in the correct geographic location, therefore you may now tick the country off your list. The article <em><a href="https://www.alexmurrell.co.uk/articles/the-age-of-average">The age of average</a></em> describes a creeping void of homogeneity, using Airbnb and cafe interior design (see fig. 1 below) to illustrate how, all over the world, interior designers have unconsciously agreed upon a globally homogenous style guide which affords well-to-do individuals the ability to travel to the other side of the world and see nothing new. Sure, you may go to a busy street market in Phnom Penh and see all sorts of people living drastically different lives to you, but at the end of the day you&#39;ll retire to a luxurious villa and forget all about the poverty surrounding you.</p>
<p><img src="https://media.graphassets.com/7C3laC1TLCv0pPbcY7jZ" alt="The+Age+of+Average\_0009\_Interiors+-+Homes (1).jpg"> Figure 1: The age of average</p>
<p>Evola&#39;s <em>Meditations on the Peaks</em> captures this perfectly in the context of mountain climbing, which had, in his view, been corrupted and trivialised into simply another vain pursuit of hedonism: &quot;we cannot help but notice the presence among our young people, of love for risk and even of heroism. [...] mountain climbing, when experienced only in keeping with this view, would not be easily distinguished from the pursuit of emotions for their own sake&quot;. Evola continues, &quot;This pursuit of radical sensations generates all kinds of extravagant and desperate feats and bold acrobatic activities [...] All things considered, these things do not differ very much from other excitements or drugs, the employment of which suggests the absence rather than the presence of a true sense of personality&quot;. To Evola, the spiritual majesty of mountains in days of antiquity arose from their inaccessibility. Virtually all ancient civilisations situated around mountains saw them as possessing some essence of immortality, conceiving of the peak as a separate plane of existence.</p>
<p>Technological advancements that make it easier to summit a mountain cheapen the experience. Today, when virtually every mountain has been conquered and we have helicopters and drone footage of the peaks, the mystery has been completely devoured by the machines of modernity. Consider the case of Tabletop Mountain in Toowoomba. In 2017 there was a proposal to build a <a href="https://www.thechronicle.com.au/news/tourism-cable-car-to-table-top-would-put-city-on-map/news-story/09128af6f823f9045d3b6d00c173f211">cable car</a> across from Picnic Point to the mountain. Tabletop Mountain is a tough but manageable climb for your average able-bodied person, but you still have to make a physical commitment to reach the top. Today, you can climb Tabletop and find yourself completely alone, slightly closer to God, able to look across the rolling hills at an ever-widening horizon. By making the summit accessible to everyone, you inevitably destroy what little mystique remains.</p>
<p>This brings us back to the &quot;traveller&quot; persona. I cannot blame, nor judge, these people. After all, what spiritually transcendent activities can your average man really engage in today? The potential for enrichment has been sucked out of virtually every activity, and we are told there are no spiritual aspects to pursue within ourselves. You shouldn&#39;t have children because it&#39;s unaffordable, or because it&#39;s bad for the environment. You shouldn&#39;t pursue God, because that&#39;s for un-developed Neanderthals who haven&#39;t yet heard the gospel of Science. The only option presented to these people to achieve fulfilment is travelling, and it makes sense, because there is still a notion of triumph in travelling. You cross vast oceans in a matter of hours, whereas your ancestors - if they were even able to travel - would be crammed into the hull of a ship for weeks or months just to see one new country. Unfortunately, when these people make it overseas, they&#39;ll typically find many of the same trappings they are accustomed to at home (see below for an image of an &quot;English&quot; street vandalised with American fast-food chains). In my own life, the last overseas travel I did was to Cambodia in 2018, and after I went to the Killing Fields and heard the harrowing tale, I returned to my hotel to find a Burger King and a Cold Stone Creamery on the same street.</p>
<p><img src="https://media.graphassets.com/RO9SZaLLTUqB1sk0XJrL" alt="5495119178\_c4921e6bc6\_o-scaled (1).jpg"> Figure 2: An &quot;English&quot; high street</p>
<p>The end result of all of this is the complete commoditisation of spirituality; a sort of drive-thru baptism where people are told that enrichment of the soul is yet another product to be consumed, rather than a lifelong pursuit within yourself. People are sold the idea that to be enriched, all you need to do is buy a ticket to see the sunrise at Angkor Wat and you&#39;ll be whole. Given enough time, I believe that even this level of spiritual enrichment is going to be made impossible for the average man (due to climate restrictions and the theft of their discretionary income by central banks) and travel may again be reserved for the social elite. Perhaps then our people will once again begin to look within.</p>
<blockquote>
<p>Those who are irresistibly attracted to the mountains have often only experienced in an emotion a greatness that is beyond their understanding. They have not learned to master a new inner state emerging from the deepest recesses of their beings. Thus, they do not know why they seek increasingly wider horizons, freer skies, tougher peaks; or why, from peak to peak, from wall to wall, and from danger to danger, through their experiences they have become inexplicably disillusioned with everything that, in their ordinary lives, appeared to them as most lively, important, and exciting.</p>
</blockquote>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[I must humble myself]]></title>
      <link>https://lukeboyle.com/blog/i-must-humble-myself/</link>
      <guid>https://lukeboyle.com/blog/i-must-humble-myself/</guid>
      <pubDate>Fri, 24 Mar 2023 00:00:00 GMT</pubDate>
      <description><![CDATA[A prayer for humility, and a celebration of my insufficiency.]]></description>
      <content:encoded><![CDATA[<blockquote>
<p>Two men went up to the temple to pray, one a Pharisee and the other a tax collector. The Pharisee stood by himself and prayed: ‘God, I thank you that I am not like other people—robbers, evildoers, adulterers—or even like this tax collector. I fast twice a week and give a tenth of all I get.’ But the tax collector stood at a distance. He would not even look up to heaven, but beat his breast and said, ‘God, have mercy on me, a sinner.’ I tell you that this man, rather than the other, went home justified before God. For all those who exalt themselves will be humbled, and those who humble themselves will be exalted.</p>
</blockquote>
<p>Luke 18:9-14</p>
<p>You must humble yourself. For all those who exalt themselves will be humbled, but the one who humbles himself will be exalted. Death is the ultimate sign of spiritual rebirth, and to die in the name of God would surely be the greatest honour a Christian can attain.</p>
<p>For my money, the crucifixion is the greatest form of humbling one could experience. Not only are you brutally executed and displayed as a warning to would-be &quot;law&quot;breakers, but those who purport to be your brothers and sisters rally around in lemming-like obedience and celebrate your execution. After all, your fate has been determined by our god, the State.</p>
<p>If the Son of God willingly humbled himself to this degree, in the face of utter betrayal from his people and in the face of false accusations, who are you to take pride in your insufficient self or your own abilities? If the time ever comes that we must choose between our faith and death, Lord please give me the strength to choose You in the face of death. Father, into your hands I commit my spirit.</p>
<p>Amen.</p>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[Book review: Mencken's Conservatism by Benjamin Marks]]></title>
      <link>https://lukeboyle.com/blog/mencken-conservatism-book-review/</link>
      <guid>https://lukeboyle.com/blog/mencken-conservatism-book-review/</guid>
      <pubDate>Wed, 21 Jul 2021 00:00:00 GMT</pubDate>
      <description><![CDATA[Book review: Mencken's Conservatism by Benjamin Marks]]></description>
      <content:encoded><![CDATA[<p>Mencken&#39;s Conservatism (2012) was written by Benjamin Marks, editor-in-chief of <a href="http://economics.org.au/">http://economics.org.au/</a>. Like most Australian authors, Marks doesn&#39;t get enough attention, so please, support the author.</p>
<p>This primer on Mencken&#39;s philosophy was quite profound, and, sadly, very underappreciated. I feel that this is an important perspective to re-frame the debate around statism versus a free society. The author shows that Mencken is not really a cynic, but a realist. As Mencken said, &quot;Reconciling ourselves to the incurable swinishness of government, and to the inevitable stupidity and roguery of its agents, we discover that both stupidity and roguery are bearable - nay, that there is in them a certain assurance against something worse.&quot;</p>
<p>Indeed, his writings didn&#39;t bring about a free society - in fact, he correctly predicted that government would continue to grow at an exponential rate after his death. Advocating for the abolition of the state (or even the greater utopian vision of a limited state) is like trying to steer a cruise ship with an oar. So, how did Mencken work for a lifetime and still carry on with relative happiness? He didn&#39;t write to persuade. The author notes, &quot;Writing to persuade can leave you with many peculiar stances. But writing to express your libertarian beliefs is a much more straightforward enterprise, and your writing is then relevant forever and won&#39;t come back to haunt you&quot;.</p>
<p>This makes me think of the modern Conservative, whose current platform generally resembles the progressive platform of yesterday. It&#39;s an eternal game of rugby where the progressives charge ahead, and the conservatives celebrate a successful tackle without noticing they&#39;ve ceded ground. When the progressives say, &quot;we want $3 trillion in equitable infrastructure spending&quot;, if your response is &quot;Let&#39;s compromise. How about $1.5 trillion?&quot;, you have already lost the debate. You tacitly admit that some government spending is good - and if some government spending is good, you obviously can&#39;t have too much of a good thing, so why stop at $1.5tn? You are attempting to persuade the progressive to your position that government spending is evil by agreeing to government spending. Instead, you should argue from the principle that all government spending is necessarily funded by theft at gunpoint and therefore any concession is unconscionable.</p>
<h2>Conservatism in 2021</h2>
<p><img src="https://media.graphcms.com/CO20W1l4Sl7pqEUHBCo1" alt="conservatism in 2021"></p>
<p>The book has shown me that I have been far too utopian in discussions about free societies. Rather than listing all the ways that a free society will be better for the individuals within it - given that this is entirely subjective, and many people find a great deal of comfort in being subordinate to the coercive monopoly of the state - it is far more productive to argue from first principles. You may not be liked, but you will be authentic, and that is far more important in the long term. No amount of concession from you will make a free society any more likely. You&#39;ll either be hated for adhering to your principles, or you&#39;ll be forgotten because you abandoned them.</p>
<blockquote>
<p>&quot;The fraud of democracy, I contend, is more amusing than any other... All its axioms resolve themselves into thundering paradoxes, many amounting to downright contradictions in terms. The mob is competent to rule the rest of us - but it must be rigorously policed itself. There is a government, not of men, but of laws - but men are set upon benches to decide finally what the law is and may be [...] I confess, for my part, that it greatly delights me. I enjoy democracy immensely. It is incomparably idiotic, and hence incomparably amusing.&quot;</p>
<p>H. L. Mencken</p>
</blockquote>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[Book review: How an Economy Grows and Why it Crashes (2010)]]></title>
      <link>https://lukeboyle.com/blog/how-an-economy-grows-why-it-crashes-review/</link>
      <guid>https://lukeboyle.com/blog/how-an-economy-grows-why-it-crashes-review/</guid>
      <pubDate>Tue, 08 Jun 2021 00:00:00 GMT</pubDate>
      <description><![CDATA[Book review: How an Economy Grows and Why it Crashes (2010)]]></description>
      <content:encoded><![CDATA[<p>How an Economy Grows and Why it Crashes (henceforth known as &quot;the book&quot;) is written by Peter and Andrew Schiff. The book is an allegory for the economy based on a fantasy island where every man needs one fish a day to be satisfied. It outlines the ways the inhabitants of the island increase their productive capacity using capital investment and under-consumption in a very approachable, easy-to-understand manner. Peter and Andrew Schiff borrowed the central allegory from a book written by their father Irwin Schiff (a revered tax protestor whose book The Federal Mafia is one of only two books to be banned in America), entitled How an Economy Grows and Why it Doesn&#39;t. Having not read the original, I have to assume that the primary differentiating factor between the books is that the adaptation includes an explanation of the cause and aftermath of the 2008 housing crisis.</p>
<p>The book is very entertaining, and it&#39;s a very easy read (it took me about 5 hours, and I&#39;m a particularly slow reader). I went through with a highlighter and emphasised the key points, but I found that as I got to the middle of the book, the insights started to dry up. By Chapter 8 (A republic is born), it started to drag. I suspect - though I may be wrong - that this is approximately where the original allegory of Able, Baker, and Charlie growing the economy ended, and where the younger Schiffs&#39; original portion began. It was still entertaining, but unlike the start, which was packed with easy-to-understand explanations of economic principles, the middle part leading up to the housing crisis was mostly a rushed re-telling of history with a healthy helping of fish puns.</p>
<p>When the authors got past the historical portion and into the future, the book did read better, and it ended very strongly. As this book came out in 2010, the authors envisioned a future where America had to face the music, and Obama took responsibility for his economic policy blunders. Unfortunately, with hindsight, we know that sort of happy ending is rare in politics. The Obama administration&#39;s and the Fed&#39;s policies got worse, and Trump inherited and continued them. Ten years after publication, the crash described in the book hasn&#39;t arrived as the authors expected, but given the state of the American economy it seems more likely by the year.</p>
<p>Here&#39;s some key takeaways from the book:</p>
<h2>Demand and consumption do not equate to economic growth</h2>
<p>(After increasing the productive capacity of the island; that is, catching more fish) &quot;This didn&#39;t happen because the three guys were unsatisfied with their limited lifestyle. Their hunger, which is labeled &quot;demand&quot; in economic terms, was necessary to spur economic growth but not sufficient to achieve it.&quot;</p>
<p>&quot;With their extra fish, the islanders can finally eat more than one fish per day. But the economy didn&#39;t grow because they consumed more. They consumed more because the economy grew.&quot;</p>
<h2>Denying loans with no pay-off</h2>
<p>(About Able giving a loan to someone to take a holiday) &quot;Not only would such a transaction put his savings at unnecessary risk, but it would mean that the capital would be unavailable for more productive loans.&quot;</p>
<p>&quot;In actuality, loans to consumers that do not fundamentally improve productive capacity are a burden to both the lenders and the borrowers.&quot;</p>
<h2>Falling prices are good for everyone</h2>
<p>&quot;Steadily dropping prices also encourage savings as islanders begin to understand that their fish would likely buy more goods in the future than they do in the present.&quot;</p>
<p>Keynesians react to falling prices like a vampire reacts to a crucifix. Such a reaction is understandable when you realise that their theories are predicated on the idea that spending (i.e. consumption) equates to economic growth. This is why their primary course of action when faced with an economic contraction is monetary stimulus. Inflation is the best way to ensure people spend what they make, because if people know prices are going to rise, they are more likely to spend their money on goods they&#39;ll need in the future.</p>
<p>I&#39;d suggest this book for people with a cursory interest in economics but without much of a background. It&#39;s quite easy to grasp and would be good for young high school students.</p>
<p>I give it a 6/10.</p>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[Protecting yourself from inflation]]></title>
      <link>https://lukeboyle.com/blog/protecting-yourself-from-inflation/</link>
      <guid>https://lukeboyle.com/blog/protecting-yourself-from-inflation/</guid>
      <pubDate>Sat, 24 Apr 2021 00:00:00 GMT</pubDate>
      <description><![CDATA[Protecting yourself from inflation]]></description>
      <content:encoded><![CDATA[<p>Your currency is worthless.</p>
<p>When priced in gold, you can see that over the last hundred years (see the graph below), the US Dollar has lost virtually all of its value. The graph below plots the price of gold per ounce in USD from 1915 to present. In 1915 the price of an ounce of gold was $19.25. When the dollar was created in 1792, the cost of gold was $18.60 per ounce, which we will refer to as the baseline value of the dollar. Between 1792 and 1915 (123 years) the price of gold only increased by 65 cents (roughly half a cent per year, or an average of 0.028% p.a. in inflation). During this period wages were relatively flat; however, America also became heavily industrialised, and the cost of living halved. So, not only did the value of your money remain flat, you were able to buy more goods.</p>
<p><img src="https://media.graphcms.com/MCjJwlW6RoXMezNwPKLg" alt="gold-to-usd.png"></p>
<p>You&#39;ll note that the price of gold was flat until ~1932, when the government decided that it needed to devalue the currency so it could create more dollars (under the gold standard, the supply of dollars was limited by the gold reserves backing them), so the legislature re-defined the value of an ounce of gold to be $35. Back then, it was a simple change of definition, since the dollar was still tied to, and redeemable in, gold and silver. You&#39;ll notice that all hell breaks loose in 1971 when America left the gold standard (duping the entire world into accepting, in exchange for their exports, fake money tied to no real-world value), and people - including foreign investors - were no longer allowed to redeem their dollars for gold. Between 1971 and today, the price of gold rose from $36.56 to $1715.24 (as of March 2021). That is a face-melting 4,591% increase in 50 years - a compound annual rate of roughly 8%. (Dividing the total by 50 to get ~92% per year overstates it, because the increase compounds.)</p>
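<p>To make that arithmetic explicit, here is a small TypeScript sketch (using the figures quoted above) showing why the naive per-year average and the compound annual growth rate differ so much:</p>
<pre><code class="language-typescript">// Gold price in USD per ounce, from the figures above
const priceStart = 36.56; // 1971
const priceEnd = 1715.24; // March 2021
const years = 50;

// Total increase over the whole period: ~4591%
const totalIncrease = (priceEnd / priceStart - 1) * 100;

// Naive average: total divided by years (~92%), which ignores compounding
const naiveAverage = totalIncrease / years;

// Compound annual growth rate: the rate that actually compounds
// from $36.56 to $1715.24 over 50 years (~8% p.a.)
const cagr = (Math.pow(priceEnd / priceStart, 1 / years) - 1) * 100;

console.log(totalIncrease.toFixed(1)); // 4591.6
console.log(naiveAverage.toFixed(1)); // 91.8
console.log(cagr.toFixed(1)); // 8.0
</code></pre>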
<p>The graph begins shortly after 1913, when the Federal Reserve Act was passed (creating the Federal Reserve) and the Sixteenth Amendment was ratified (allowing the government to tax income). The Federal Reserve was intended to be an apolitical, non-government organisation so that the creation of money would be separate from the legislature. That seems ridiculous in hindsight today, as the Fed and the legislature are just two sides of the same spending-addicted coin, hell-bent on debasing the currency no matter the cost. This isn&#39;t anything new; the Fed has been monetising the government&#39;s debt since its inception, but I think the wheels came off when the legislature decided to leave the gold standard to fund its war machine abroad. The few checks and balances that previously existed evaporated. Before, the legislature had to vote to devalue the dollar before the Fed could print more money; after leaving the gold standard, the Fed now digitally prints tens of billions of dollars per month to buy treasury bonds and fuel the stock market bubble. Similarly, the income tax was another ill-fated policy. Originally, the income tax rate was 1% for people earning $0 to $20,000 (which is ~$529,911 today according to the CPI, or $1,782,067 when priced in gold) with a top nominal tax rate of 7%. Clearly, in hindsight, these low rates were the camel&#39;s nose under the tent, and this was the government laying the groundwork for massive tax hikes during World War I (yet another war America didn&#39;t need to be involved in). By 1918, the top nominal tax rate was a whopping 77%.</p>
<p>All of that background is just to highlight the state of decay America is in, which continues to accelerate as government grows. The reason I like to view inflation through the lens of the gold price is that gold has been used as money for thousands of years - a much more meaningful time-scale than the ~230 years the USD has been in existence. Interestingly, when you price the Dow Jones index in dollars, it&#39;s at a record high of $32,981.55 (compared to $1457.37 in 1915), an increase of 2,163%. However, when you price it in ounces of gold, today it&#39;s at 19.23oz (compared to 2.86oz in 1915) - an increase of only 572.4%, a far cry from the &gt;2,000% when measured in dollars.</p>
<p>Now that you&#39;re caught up on the historical horrors of the US dollar, we can talk about the present horrors. 40% of all dollars in existence were printed in 2020, and in 2021 nearly half a trillion dollars has already been added to the national debt. The CPI is the measure of inflation we typically use (and it is actively manipulated to understate the true increase in goods prices); for the twelve months to March 2021 it measured a 2.6% increase. It is no longer a conversation of &quot;massive inflation is coming&quot;. Massive inflation is HERE! Look at the price of lumber, for example (see below), which has risen 47% so far this year, after a 125% increase last year.</p>
<p><img src="https://media.graphcms.com/7gsphVSiiZVerhhEvjw6" alt="lumber-prices.png"></p>
<p>It should be clear from the preceding rant that I believe gold is the best way to hedge yourself against inflation, especially since, unlike wheat and oil, it never decays. Remember, it&#39;s not that gold is getting more expensive, it&#39;s that the dollar is getting weaker. So, swapping your depreciating currency for precious metals is the best way to protect yourself against these fluctuations. What should you do if you don&#39;t have the available capital to buy gold? Firstly, you can buy much larger quantities of silver for far lower prices than you can buy gold, and it&#39;s more viable for everyday exchanges because it&#39;s more divisible.</p>
<p>Assuming you can&#39;t buy silver either, you should not let your money wither away at 0% interest in the bank or risk losing it on overpriced stocks (I&#39;m not advising you to pull your retirement funds out of the market, but you could consider mixing in some inflation hedges like gold and gold mining stocks). The best thing you can do locally is stock up on goods that you know you will need down the road. In 2017 Mark Cuban (newly converted Bitcoin bull) said that people struggling to get ahead should buy in bulk and on sale. In hindsight, this was fairly prophetic considering where we are today. Many goods on the shelves are experiencing unprecedented price surges, and that doesn&#39;t even address the real possibility of serious goods shortages in the near future. Start working out how much of each non-perishable good you use per month and extrapolate that for a year. Here are some ideas for you:</p>
<h2>Toilet paper</h2>
<p>If you use 4 rolls a month, that works out to 48 per year. Buy four 24-packs and you are set for two years without buying toilet paper. Make sure you store it in a cool, dry place.</p>
<h2>Laundry detergent</h2>
<p>This is rather self-explanatory, but you should be wary that powdered detergent only has a shelf life of 6 months, so consider using liquid detergent.</p>
<h2>Toothpaste</h2>
<p>Colgate recommends a maximum of two years&#39; shelf life, so don&#39;t buy too much.</p>
<h2>Other examples</h2>
<p>You can really go hell for leather with this; just consider whether you have the right conditions in your house to store things long term.</p>
<ul>
<li>Toothbrushes</li>
<li>Tissues (or switch to washable cloth handkerchiefs)</li>
<li>Dehydrated meals</li>
<li>First aid supplies</li>
<li>Powdered milk</li>
</ul>
<p>If you are curious about getting started buying precious metals, check out <a href="https://schiffgold.com/">Schiff Gold</a> for Americans, and <a href="https://www.perthmint.com/">Perth Mint</a> or <a href="https://www.abcbullion.com.au/">ABC Bullion</a> for Australians. If you are worried about inflation, you should steer clear of buying gold/silver ETFs, as you don&#39;t have the security of physical metal and it can be easily seized by the government. With a gold broker like the above, you can store your metal at their secure facilities and request redemption at any time. I don&#39;t keep any physical metals myself as I don&#39;t have anywhere to securely store them, but a bank safe deposit box would be a good alternative.</p>
<p>Good luck out there everyone.</p>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[Sitemaps for Next.js static sites with dynamic routes]]></title>
      <link>https://lukeboyle.com/blog/sitemaps-for-next-js-static-sites/</link>
      <guid>https://lukeboyle.com/blog/sitemaps-for-next-js-static-sites/</guid>
      <pubDate>Sat, 15 Aug 2020 00:00:00 GMT</pubDate>
      <description><![CDATA[Sitemaps for Next.js static sites with dynamic routes]]></description>
      <content:encoded><![CDATA[<p>I just recently re-built my Gatsby site using Next.js. I liked Gatsby for a while; however, I had a few issues:</p>
<ul>
<li><p>the build process has always been dodgy for me</p>
</li>
<li><p>the watch task (i.e. <code>gatsby develop</code>) failed after being up for a while</p>
</li>
<li><p>builds didn&#39;t work on the Windows Subsystem for Linux</p>
</li>
<li><p>it&#39;s overburdened with configuration modules</p>
</li>
</ul>
<p><img src="https://media.graphcms.com/ysHnBGPAQLGssV0cYfyF" alt="Google&#39;s lighthouse audit result shows 99 for performance, 100 for accessibility and best practices"></p>
<p><em>The Lighthouse audit results after my first round of changes</em></p>
<p>The biggest selling point for me is the <code>getStaticPaths</code> function in <a href="https://nextjs.org/docs/basic-features/pages#scenario-2-your-page-paths-depend-on-external-data">Next.js pages</a>. Before, as a pre-build step, I was generating the entire page tree of React components using a node script. Super heavy-handed, and I&#39;m sure there are better ways to do it in Gatsby. What I&#39;m doing now looks like this:</p>
<pre><code>.
└── pages
    └── blog-posts
        └── [year]
            └── [month]
                └── [title].tsx
</code></pre>
<p>The resulting routes are exactly what you see in the browser&#39;s address bar. Blog post routes look like: <code>/blog-posts/2020/08/some-name</code></p>
<p><code>[title].tsx</code></p>
<pre><code class="language-typescript">export function Post() {}

export async function getStaticPaths() {
    const blogPosts = await getBlogPosts();

    const paths = blogPosts.map(
        post =&gt; `/blog-posts/${post.year}/${post.month}/${post.title}`
    );

    return { paths, fallback: false };
}
</code></pre>
<p>In the <code>getStaticPaths</code> function you return a list of new paths and Next.js automatically spits those pages out. At build time, you can then use the path parameters to fetch external data and build your components. What this means, in effect, is that your <code>/pages</code> folder no longer maps 1:1 to the static output. So you can&#39;t just build a sitemap off the page directory anymore.</p>
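<p>The build-time data fetching mentioned above happens in <code>getStaticProps</code>, which receives the path parameters for each generated page. A rough sketch (assuming a hypothetical <code>getBlogPost</code> helper that looks a post up by its route segments):</p>
<pre><code class="language-typescript">export async function getStaticProps({ params }) {
    // params holds the dynamic segments from the route,
    // e.g. { year: &#39;2020&#39;, month: &#39;08&#39;, title: &#39;some-name&#39; }
    const post = await getBlogPost(params.year, params.month, params.title);

    // Everything returned here is passed to the page component at build time
    return { props: { post } };
}
</code></pre>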
<p>On the sitemap problem, there&#39;s a comprehensive article by Lee Robinson (<a href="https://leerob.io/blog/nextjs-sitemap-robots">https://leerob.io/blog/nextjs-sitemap-robots</a>), but his guide also assumes your source pages are 1:1 with the expected output. I adapted his script to build off the folder output instead.</p>
<ol>
<li>Install the required dependencies (square brackets denote optional dependencies)</li>
</ol>
<p><code>yarn add -D glob [chalk] [prettier]</code></p>
<ol start="2">
<li>Create sitemap script</li>
</ol>
<pre><code class="language-javascript">import glob from &#39;glob&#39;;
import fs from &#39;fs&#39;;
import { red } from &#39;chalk&#39;;
import prettier from &#39;prettier&#39;;
import prettierConfig from &#39;./.prettierrc.js&#39;;

(() =&gt; {
    // default next js output is `out`
    // all the pages are guaranteed to be html
    glob(&#39;./out/**/*.html&#39;, (err, files) =&gt; {
        // If there&#39;s no files in the output, a build probably hasn&#39;t been run
        if (!files.length) {
            console.error(red(&#39;Could not find output directory&#39;));
            process.exit(1);
        }

        const sitemap = `
        &lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
        &lt;urlset xmlns=&quot;http://www.sitemaps.org/schemas/sitemap/0.9&quot;&gt;
            ${files
                .map(page =&gt; {
                    const path = page.replace(&#39;./out&#39;, &#39;&#39;).replace(&#39;.html&#39;, &#39;&#39;);
                    const route = path === &#39;/index&#39; ? &#39;/&#39; : path;

                    return `
                        &lt;url&gt;
                            &lt;loc&gt;${`https://{Your Domain Here}${route}/`}&lt;/loc&gt;
                            &lt;changefreq&gt;daily&lt;/changefreq&gt;
                            &lt;priority&gt;0.7&lt;/priority&gt;
                        &lt;/url&gt;
                    `;
                })
                .join(&#39;\n&#39;)}
        &lt;/urlset&gt;
    `;

        // Optional: you can remove this block if you aren&#39;t using prettier
        const formatted = prettier.format(sitemap, {
            ...prettierConfig,
            parser: &#39;html&#39;
        });

        fs.writeFileSync(&#39;./out/sitemap.xml&#39;, formatted);
    });
})();
</code></pre>
<ol start="3">
<li>Add script to <code>package.json</code></li>
</ol>
<pre><code class="language-json">{
    &quot;scripts&quot;: {
        &quot;start&quot;: &quot;next start&quot;,
        &quot;build&quot;: &quot;next build &amp;&amp; yarn run build:sitemap&quot;,
        &quot;build:sitemap&quot;: &quot;node ./generate-sitemap.js&quot;
    },
    &quot;devDependencies&quot;: {
        &quot;chalk&quot;: &quot;^4.1.0&quot;,
        &quot;glob&quot;: &quot;^7.1.3&quot;,
        &quot;prettier&quot;: &quot;^1.18.2&quot;
    }
}
</code></pre>
<p>That&#39;s pretty much it for my implementation. You can see my sitemap here: <a href="https://lukeboyle.com/sitemap.xml">https://lukeboyle.com/sitemap.xml</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[Do not trust Google]]></title>
      <link>https://lukeboyle.com/blog/do-not-trust-google/</link>
      <guid>https://lukeboyle.com/blog/do-not-trust-google/</guid>
      <pubDate>Sat, 01 Aug 2020 00:00:00 GMT</pubDate>
      <description><![CDATA[Do not trust Google]]></description>
      <content:encoded><![CDATA[<p><img src="https://media.graphcms.com/ikcRIQkhR6Gk8Boaa4Dn" alt="google-fasc.jpg"></p>
<p>I realise that this is one of the most well-explored topics on the privacy-conscious edges of the internet, but seriously... Do not trust Google. Facebook seems to be our current punching bag of choice because of their supposed ability to manipulate political opinion, but in my opinion Google is a much more insidious company with far greater potential for abuse. Google is the <a href="https://www.investopedia.com/news/facebook-google-digital-ad-market-share-drops-amazon-climbs/">largest digital advertising platform</a> by a significant margin (accounting for 36.3% of digital ad spending in the U.S., with Facebook trailing at 19.3%). At the end of the day, if you delete your Facebook account, what are you really missing out on?</p>
<p>Google (or more specifically Alphabet Inc.) owns the largest search engine (Google.com), the largest video streaming platform (Youtube), and the most-used smartphone operating system (Android). You might ask, &quot;What&#39;s wrong with that? Sounds like they&#39;re just very successful at what they do&quot;. Well, let&#39;s break down those three markets (Search, Streaming, Mobile).</p>
<p><img src="https://media.graphcms.com/o8Gi7AdvScqL0ooh35Dq" alt="The classic Google slogan &quot;Don&#39;t be evil&quot;, except the end of the &quot;don&#39;t&quot; is crossed out so it says &quot;Do be evil&quot;"></p>
<h2>Search</h2>
<p>Google&#39;s estimated market share for search traffic globally is 92.16% (<a href="https://gs.statcounter.com/search-engine-market-share">source</a>). As people increasingly use search to navigate the web (as opposed to typing a URL into the address bar), this traffic grows, those people see more ads, and Google makes more money. Google then uses this money to purchase exclusivity agreements with the likes of Apple: just two years ago it was announced that Google would pay Apple $12 billion US to remain the default search engine on Safari in 2019 (<a href="https://fortune.com/2018/09/29/google-apple-safari-search-engine/">source</a>) - roughly $10 per user.</p>
<h3>How does Google abuse Search?</h3>
<p>If you ask the average user how Google search works, they&#39;d probably say &quot;it just searches for your search term across the web&quot;, but that is just the tip of the iceberg. Other dimensions of search include:</p>
<ul>
<li>what kind of users visit that website (and does the searcher fit that profile)?</li>
<li>how much traffic does the website get?</li>
<li>how relevant is the content to the search term (SEO magic)?</li>
<li>and, most importantly, does this website fit an acceptable narrative?</li>
</ul>
<p>There&#39;s certainly an argument to be made for suppressing some search results, such as pro-authoritarian sites (e.g. communist or fascist), extremely fringe conspiracy theories, illegal pornography, or bomb-making instructions. Advertisers probably don&#39;t want their ads next to those results. Rightly or wrongly, Google is already suppressing content from such websites (though they&#39;re probably still being indexed).</p>
<p>If Google can suppress fascist content from sites like Stormfront (prominent white-supremacy forum), then who is to say which content they can or cannot suppress? Breitbart is a well-known right wing news site that has had their content <a href="https://www.breitbart.com/tech/2020/07/28/election-interference-google-purges-breitbart-from-search-results/">almost entirely purged</a> from Google search results (as evidenced by the &quot;search engine visibility&quot; chart below).</p>
<p><img src="https://media.graphcms.com/UB5sd9dUTfS1XKMbKAFN" alt="Search engine visibility index for Breitbart.com shows significant increase in visibility leading up to the 2016 presidential election with a sharp drop in mid 2017"></p>
<p>You don&#39;t have to agree with them politically to see that Google is applying different standards to conservative content than to more liberal content. I don&#39;t visit Breitbart, I don&#39;t read their articles, and frankly I don&#39;t give a shit what they have to say, but I believe in a free and open internet. If you believe in a free and open internet then you have to agree this is wrong. During the Cold War, anyone who didn&#39;t follow the extreme protectionist beliefs of the time <a href="https://www.history.com/topics/cold-war/red-scare">was shouted down as a communist</a> (even Martin Luther King Jr. was dismissed as a communist by J. Edgar Hoover). The same thing is happening now, but the buzzword is different. The new weaponised word is &quot;Nazi&quot;. If history had unfolded differently, I have no doubt that it would be left-wing websites being suppressed in search results, and that still wouldn&#39;t be okay.</p>
<p>There&#39;s plenty of evidence to suggest that Google is manually making these decisions to block conservative websites; however, Alphabet CEO Sundar Pichai denied at the recent <a href="https://www.theguardian.com/technology/2020/jul/29/tech-hearings-facebook-mark-zuckerberg-amazon-jeff-bezos-apple-tim-cook-google-sundar-pichai-congress">Congressional antitrust hearing</a> that they manually censor websites, except in cases of legal requirements or copyright issues. I don&#39;t buy that, personally.</p>
<h2>Streaming</h2>
<p>When YouTube was founded it faced severe scaling problems (because video processing and streaming are extremely expensive). Fortunately for them, Google saw potential in the platform and purchased the company for $1.65bn in Google stock, and their money issues were over. Google threw money into scaling the platform, and it experienced great growth. This success turned out to be a major problem for YouTube because, from the time it was purchased, it was making a loss. In recent years YouTube has become profitable; however, without the bottomless pockets of Google behind it, it never would have been able to accomplish this. What incentive could Google have to take losses year after year on YouTube? Well, it turns out user data is particularly delicious. Mastercard&#39;s CEO infamously said <a href="https://www.cnbc.com/2017/10/24/mastercard-boss-just-said-data-is-the-new-oil.html">&quot;data is the new oil&quot;</a>. I personally can&#39;t wait for Facebook, Amazon, and Google to become para-military organisations in the up-coming data wars.</p>
<p>YouTube has essentially bullied its way into market dominance using Google&#39;s bottomless pit of money. This is problematic because it allows failing companies to cheat death, like a bottom-feeding fish latching onto a whale shark and hitching a ride. As I mentioned before, video streaming is extremely expensive, so it makes sense that great cloud infrastructure is a prerequisite to success. Well, big surprise, Google offers world-class commercial cloud infrastructure with Google Cloud Platform (GCP)! Do you suppose YouTube is paying full price for their infra?</p>
<p><img src="https://media.graphcms.com/FJYcahNfTky0e3q9lALy" alt="A whale shark with small fish adhered to the top of it. YouTube, Google Play, and Google Plus logos are superimposed on the small fish heads"></p>
<p>So, when you see a headline that says &quot;<a href="https://bgr.com/2020/07/30/google-one-free-phone-backup-ios-android/">Stop paying for iCloud – Google One will now back up your iPhone for free</a>&quot;, before obeying the shill who wrote it, you should ask yourself, &quot;How can a company afford to give away so much storage space for free?&quot;. Well, they can&#39;t. Google simply obscures their losses with the immense revenue from Google Ads in the profit/loss statement at the end of each quarter. For more reading on this topic, Tim Bray has a fantastic article called <a href="https://www.tbray.org/ongoing/When/202x/2020/06/25/Break-Up-Google">&quot;Break up Google&quot;</a>.</p>
<h2>Mobile</h2>
<p>This article is already becoming too long, so I&#39;m just going to cover mobile quickly. As Tim Bray mentions in the article above, Android isn&#39;t really a business. The only real non-ad revenue they have is from the commission they get from app purchases and licensing fees from OEMs (e.g. Samsung, Huawei, LG). How, then, are they able to sustain hundreds of highly paid engineers and all the other non-technical staff required to support the system?</p>
<p><img src="https://media.graphcms.com/IjNvNLdNQ067qLRMTNAV" alt="A map of the world with Android vs iOS market share. iOS is most dominant in first-world countries, whereas Android dominates emerging nations"></p>
<p>Above is a map of Android vs iOS market share. You can see that iOS pretty much only has the dominant market share in first-world countries (like the USA, Canada, Australia, the UK, and Japan). Most of the emerging countries in the world are strongly in favour of Android because, unlike Apple&#39;s offering, the OS is not restricted to a particular device. So, countries like India (where the number of smartphone users has increased sharply from 199 million in 2015 to 401 million in 2020 - <a href="https://www.statista.com/statistics/467163/forecast-of-smartphone-users-in-india/">source</a>) mostly purchase low-cost Android phones (e.g. Huawei, Xiaomi, Oppo). Emerging markets are extremely important to companies like Google partly because these countries are easier to exploit: they don&#39;t have strong legislation to protect users from predatory advertising, anticompetitive tactics, or data-privacy abuses. This is why I speculate that Mastercard is scrambling to <a href="https://www.forbes.com/sites/tomgroenfeldt/2020/05/06/financial-inclusion-helps-refugees-move-from-aid-recipients-to-earners-and-tax-payers">connect refugees to the global payment network</a> (remember that quote from the Mastercard CEO: &quot;data is the new oil&quot;) and, indeed, why Mastercard <a href="https://www.jihadwatch.org/2018/08/patreon-and-mastercard-ban-robert-spencer-without-explanation">forced Patreon to ban Robert Spencer for his anti-refugee sentiment</a>.</p>
<p>Again, regardless of whether you agree with someone&#39;s political leaning or rhetoric, I shouldn&#39;t have to explain why it&#39;s ludicrous to believe that faceless, soulless corporations such as Mastercard or Google give two fucks about moral righteousness when their only master is a number ticker on the Nasdaq website.</p>
<h2>Closing thoughts</h2>
<p>So, after reading all of that, I have to ask:</p>
<p>Why don&#39;t you route all of your web traffic through Google&#39;s servers?</p>
<p><img src="https://media.graphcms.com/bhLTDOmQ2CLLkbtBY2cM" alt="Google DNS logo"></p>
<p>To be clear, I&#39;m not accusing Google of storing DNS logs or associating them with specific users (they claim in their terms of service that they don&#39;t); however, I think it&#39;s unreasonable to believe they aren&#39;t capable of it. I also wouldn&#39;t put it past them to lie in their terms of service, considering their recent run-ins with the law (<a href="https://www.cbsnews.com/news/google-eu-fines-google-1-7-billion-for-blocking-advertising-rivals/">$1.7bn fine for anti-competitive behaviour</a>, <a href="https://www.nytimes.com/2019/09/04/technology/google-youtube-fine-ftc.html">$170m for violating children&#39;s privacy on YouTube</a>, <a href="https://www.theverge.com/2019/1/21/18191591/google-gdpr-fine-50-million-euros-data-consent-cnil">50 million euro fine for GDPR violations</a>).</p>
<p>$2bn doesn&#39;t matter to Google. It&#39;s a drop in the bucket, especially considering they would probably be able to freely harvest user data for months or even years before they&#39;re caught and slapped on the wrist. If a single user&#39;s search data is worth upwards of $10 a year (see the Safari Google default search engine deal) for Google, then the complete logs of their browsing history would be quite juicy indeed.</p>
<p>Okay, so that&#39;s verging on conspiracy theory I suppose. Maybe Google DNS will remain clean. How about you get a Google® Nest™ WiFi mesh router and let them inspect all of your web traffic that way?</p>
<p><img src="https://media.graphcms.com/9J70XCjSqml7NrvG9I5D" alt="Google Nest Wifi Router product photo"></p>
<p>Or perhaps you want to buy the new Pixel and give them advanced analytics about how you use your phone (<a href="https://www.searchenginejournal.com/google-privacy-lawsuit-android-apps/374952/">privacy class action lawsuit</a>), everywhere you go (Location History), how much physical activity you do (Google Fit), every article/video you engage with (Chrome), everything you buy (Google Pay - and incidentally, how much disposable income you have, so they can better target more relevant ads to you). All of these &quot;services&quot; are simply a ruse so that Google can build an extremely accurate profile about the type of consumer you are and target you with more advertising to turn you into a soulless consumer.</p>
<p>I don&#39;t want these people to also be the arbiters of what content I should or should not be able to see online.</p>
<h2>Actual closing thoughts</h2>
<p>Well that was pretty depressing. So, how can you reclaim a shred of your privacy?</p>
<h3>Search</h3>
<p>There&#39;s a swathe of privacy-focused alternatives popping up these days. I personally use <a href="https://duckduckgo.com">duckduckgo.com</a>, which is built on the Bing search API and does not track any user data. I&#39;ll concede that DuckDuckGo&#39;s search results aren&#39;t as good, but I&#39;m okay with that. Another one is <a href="https://www.startpage.com/">https://www.startpage.com/</a>, which actually uses Google results but ensures Google can&#39;t track your activity.</p>
<h3>Streaming</h3>
<p>I&#39;m currently using <a href="https://invidio.us">invidio.us</a> which, like Startpage, is just a wrapper for YouTube, so you can get the same content minus the tracking. As a bonus, check out <a href="https://addons.mozilla.org/en-US/firefox/addon/invidition/">Invidition on the Mozilla extension store</a> to open all YouTube links in invidio.us instead.</p>
<h3>Mobile</h3>
<p>I really don&#39;t have an answer for this one. I&#39;m an iPhone user, but really, Apple is not much better, especially if you care about having a repairable device. If you really want to go hardcore, there are custom Android forks like <a href="https://grapheneos.org/">GrapheneOS</a>.</p>
<h3>Browsing</h3>
<p>I didn&#39;t really touch on Chrome, but I&#39;m not happy with Chrome either. Since Edge <a href="https://www.lifewire.com/what-it-is-chromium-edge-4842127">switched to using Chromium</a>, the only non-Chromium browser left with real market share is Safari. I use Firefox because I believe in Mozilla and their commitment to maintaining privacy. They&#39;re doing good stuff lately.</p>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[OLAD (one lift a day) results so far]]></title>
      <link>https://lukeboyle.com/blog/olad-results-so-far/</link>
      <guid>https://lukeboyle.com/blog/olad-results-so-far/</guid>
      <pubDate>Sat, 25 Jul 2020 00:00:00 GMT</pubDate>
      <description><![CDATA[OLAD (one lift a day) results so far]]></description>
      <content:encoded><![CDATA[<p>3 months ago I started a <a href="/blog/experimenting-with-olad/">new OLAD program</a>. So far it has been a massive success. My lifts are up considerably, and I&#39;m just generally enjoying my time in the gym.</p>
<h2>Before</h2>
<ul>
<li>Back Squat: 95kg x 3</li>
<li>Overhead Press: 67.5kg x 1</li>
<li>Farmer Carry: 140kg x 2 (15 meters)</li>
<li>Bench Press: 80kg x 5</li>
<li>Deadlift / Deficit Deadlift: 125kg x 5</li>
<li>Pendlay Row: 70kg x 5</li>
<li>Pull Up: BW (115.7kg) x 3</li>
<li>Trap Bar Deadlift: 145kg x 1</li>
</ul>
<h2>After</h2>
<ul>
<li>Back Squat: 115kg x 2 (+16.35% 1RM)</li>
<li>Overhead Press: 80kg x 2 (+24.4% 1RM)</li>
<li>Farmer Carry: 168kg x 2 (15 meters) (+20.4% 1RM)</li>
<li>Bench Press: 110kg x 1 (+18.3% 1RM)</li>
<li>Deadlift / Deficit Deadlift: 165kg x 2 (+18.5% 1RM)</li>
<li>Pendlay Row: 85kg x 4 (+17.1% 1RM)</li>
<li>Pull Up: BW (122kg) x 2</li>
<li>Trap Bar Deadlift: 180kg x 3 (+35.2% 1RM)</li>
</ul>
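<p>For context, percentage changes like these come from comparing estimated one-rep maxes before and after. Here&#39;s a minimal sketch using the Epley formula (an assumption on my part - other e1RM formulas give slightly different numbers, so don&#39;t expect it to reproduce the percentages above exactly):</p>
<pre><code class="language-typescript">// Epley estimated one-rep max: weight * (1 + reps / 30)
function e1rm(weight: number, reps: number): number {
    return weight * (1 + reps / 30);
}

// e.g. back squat: 95kg x 3 before, 115kg x 2 after
const before = e1rm(95, 3); // 104.5kg
const after = e1rm(115, 2); // ~122.7kg

const improvement = (after / before - 1) * 100;
console.log(`+${improvement.toFixed(1)}%`); // ~+17.4% with this formula
</code></pre>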
<p>So how did I &quot;increase my 1RM by 18-35%&quot;? One simple trick! &quot;Adherence&quot; (and maybe a bit of residual newbie gains).</p>
<p>Adherence before OLAD</p>
<p><img src="https://media.graphcms.com/X9mfFiQsKOVLjSTw2zTg" alt="adherence-before.jpg"></p>
<p>Adherence after OLAD</p>
<p><img src="https://media.graphcms.com/JVV5MqEeQlWJrHEZ08y6" alt="adherence-after.jpg"></p>
<p>I really just owe this adherence to the renewed enjoyment I&#39;ve had in the gym. It&#39;s pretty great to get out of the gym within 45 minutes (excluding prehab/rehab). I was quite fatigued by the few cycles of 5/3/1 I had just done (not to mention the 1.5 hour workouts), so it makes sense that I was burned out.</p>
<p>All things considered, I&#39;m really happy with this program. After my second knee dislocation in 2019 I didn&#39;t expect to be squatting again but here we are. It&#39;s not the most encouraging sign when your bench is beating your squat, but I&#39;m not giving up.</p>
<h2>Lifts</h2>
<h3>Strict overhead press - 80kg (~176 pounds) x 2</h3>
<h3>Trap bar deadlift - 180kg (~396 pounds) x 3</h3>
<h3>Squat</h3>
<p>I have no recent squat footage, so here&#39;s some of this weird reverse safety bar front squat thing I got from <a href="https://www.youtube.com/watch?v=mUAOLEPEuV0">John Meadows</a>.</p>
<h3>Bench - 105kg (~230 pounds) x 2</h3>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[Experimenting with OLAD (One lift a day)]]></title>
      <link>https://lukeboyle.com/blog/experimenting-with-olad/</link>
      <guid>https://lukeboyle.com/blog/experimenting-with-olad/</guid>
      <pubDate>Fri, 10 Apr 2020 00:00:00 GMT</pubDate>
      <description><![CDATA[Experimenting with OLAD (One lift a day)]]></description>
      <content:encoded><![CDATA[<p>New lifters often place far too much importance on choosing the right program. Stronglifts or Starting Strength? 5/3/1 or PHUL? In reality, the right program is the one that gets you in the gym consistently. For 9 months (minus a couple for a patellar dislocation) I haven&#39;t been running a program; instead, I get to the gym in the morning and decide on my core lift. The chosen exercise depends on a few factors like how much energy I have, how my joints feel, how <a href="https://www.youtube.com/watch?v=s8IfPBA2kkA">INTENSE</a> I&#39;m feeling...</p>
<p>My progress plateau might suggest that this experiment was a failure; however, it has made the gym far more enjoyable than cranking out the same repetitive workout week in and week out. I also noticed that I end up spending far more time on the core exercise and often won&#39;t add any accessories. Workouts are overall shorter and more satisfying. I&#39;ve also been able to rotate in more variations (e.g. push press, pin press, safety bar squats, deficit deadlifts), which helps with lift boredom. The next logical step from here would be to make my workouts more consistent and strategic.</p>
<p>The One Lift a Day (OLAD) system has gained more popularity in recent years. Eric Bugenhagen has been championing OLAD <a href="https://youtu.be/gcr4aVLHaXI">for years</a> and Alec Enkiri recently broke down his <a href="https://youtu.be/yfWwfEwA1jU">OLAD program</a>. Given his insane strength and general athleticism (585lb deadlift, 4.5 second 40, 60&quot; box jumps), it&#39;s always interesting to see how his programs reflect that. I challenge you to find a cookie-cutter program that includes resisted sprints. The exercise selection with OLAD is entirely up to you and should be based on your goals, but Alec suggests including a squat, hip hinge, loaded carry, horizontal press, vertical press, and upper body pull.</p>
<p>For rep schemes and progression I turned to Dan John&#39;s <a href="https://www.t-nation.com/workouts/one-lift-a-day-program">one lift a day program</a>. This program is built on 4-week cycles like so:</p>
<ul>
<li>Week 1: 7 sets of 5</li>
<li>Week 2: 6 sets of 3</li>
<li>Week 3: 5/3/2</li>
<li>Week 4: Off</li>
</ul>
<p>I&#39;m going to be doing this program for 3 months (i.e. 3 of these 4-week cycles). My exercises (with a recent set in brackets):</p>
<ul>
<li>Monday: Back Squat (95kg x 3)</li>
<li>Tuesday: Push Press (no recent sets)</li>
<li>Wednesday: Farmer Carry (140kg for 15 meters x 2)</li>
<li>Thursday: Bench Press (80kg x 5)</li>
<li>Friday: Deadlift / Deficit Deadlift (no recent sets)</li>
<li>Saturday: Pendlay Row (70kg x 5)</li>
</ul>
<p>I&#39;ll be documenting progress for the next 3 months and we&#39;ll see if the gains gods bless me.</p>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[Github Actions for web apps]]></title>
      <link>https://lukeboyle.com/blog/github-actions-for-web-apps/</link>
      <guid>https://lukeboyle.com/blog/github-actions-for-web-apps/</guid>
      <pubDate>Mon, 12 Aug 2019 00:00:00 GMT</pubDate>
      <description><![CDATA[Github Actions for web apps]]></description>
      <content:encoded><![CDATA[<p>Arguably, the key feature that made Gitlab a market leading platform was their decision to build the platform as an end-to-end application delivery service including version control, CI, infrastructure, community engagement, and so on. The simplicity that comes with this centralisation made Gitlab really stand out when compared to the Atlassian suite of Bitbucket, Jira, and Bamboo. Even more so when compared to Github at the time, whose market offering pretty much started and ended at git (plus a few extras like gh-pages and the marketplace).</p>
<p>It has been a couple of years since Gitlab&#39;s rise to prominence and the market has certainly shifted. Even before Github was acquired by Microsoft in mid-2018 (<a href="https://github.blog/2018-06-04-github-microsoft/">source</a>), they were hard at work pushing out feature after feature.</p>
<p>Off the top of my head, I can recall these:</p>
<ul>
<li>Projects (Kanban boards with automated status changes)</li>
<li>Sponsor program</li>
<li>Package Registry (publishing for npm, NuGet, Ruby gems, all in the same platform)</li>
<li>Github Actions (my personal favourite)</li>
</ul>
<p>Github Actions is now in open beta (you can opt in here: <a href="https://github.com/features/actions">https://github.com/features/actions</a>) and it enables you to set up containerised builds, testing, and deployments in response to many Github events (push, pull request, tag, schedule).</p>
<p>The process is much the same as something like CircleCI, Travis, or Buildkite. The integration for CI checks on pull requests and commits has been in Github for years, allowing early warning for pull requests that break the build.</p>
<p>In this post I&#39;ll be showing you how to set up Github Actions to build and release a single-page React app.</p>
<p>Keep in mind that the v1 Github Actions syntax has been deprecated, so make sure you are looking at the yaml documentation. There&#39;s a handy warning at the top of the deprecated pages:</p>
<blockquote>
<p>The documentation at <a href="https://developer.github.com/actions">https://developer.github.com/actions</a> and support for the HCL syntax in GitHub Actions will be deprecated on September 30, 2019. Documentation for the new limited public beta using the YAML syntax is available on <a href="https://help.github.com">https://help.github.com</a>.</p>
</blockquote>
<p>Find the docs here: <a href="https://help.github.com/en/categories/automating-your-workflow-with-github-actions">https://help.github.com/en/categories/automating-your-workflow-with-github-actions</a></p>
<p>For this example, I&#39;ll be using <a href="https://github.com/facebook/create-react-app">Create React App</a>. Initialise that if you&#39;d like to follow along, or just retrofit an old, simple project.</p>
<p>There are two flows I want to create:</p>
<ul>
<li>CI Only</li>
<li>CI and Deploy</li>
</ul>
<p>Let&#39;s create the action file.</p>
<p>Create a folder in the root of your repo called <code>.github/workflows</code>, then create a file in that folder called <code>ci.yml</code>.</p>
<p>Let&#39;s look at the <code>ci.yml</code> file and add some boilerplate:</p>
<p><code>ci.yml</code></p>
<pre><code class="language-yaml">name: CI

on: [pull_request, push]

jobs:
    build:
        runs-on: ubuntu-18.04

        steps:
            - uses: actions/checkout@master
            - name: Use Node.js 10.x
              uses: actions/setup-node@v1
              with:
                  version: 10.x
            - name: Build
              run: |
                  npm install
                  npm run build --if-present
</code></pre>
<p>The first thing to note is on line 3: there is an option called <code>on</code> (<a href="https://help.github.com/en/articles/configuring-a-workflow#triggering-a-workflow-with-events">docs for <code>on</code></a>). This field is a list of events you want to respond to; here, the workflow fires on both pushes and pull requests. Because this <code>on</code> property is at the top level, regrettably you can&#39;t combine all your steps and choose not to run some steps on pull request. This is the reason for having two separate action files. In principle, the actions should be entirely self contained processes.</p>
<p>The <code>jobs</code> field is a list of independent jobs. By default, they run in parallel. You could use this to separate things like your unit and integration tests to speed up your CI. This example is pretty simple, so I haven&#39;t found a use for multiple jobs yet.</p>
<p>The steps field is quite simple in this example. For each step, you can choose to specify the <code>uses</code> field (<a href="https://help.github.com/en/articles/configuring-a-workflow#referencing-actions-in-your-workflow">docs</a>). The format for this argument is <code>[owner]/[repo]@[ref]</code> or <code>[owner]/[repo]/[path]@[ref]</code>. You can reference actions in your current repository or you can reference standard actions as per the example above. <code>actions/checkout@master</code> checks out the current branch. <code>actions/setup-node@v1</code> sets up Node, probably through a Docker container. You can provide arguments to the action using the <code>with</code> key.</p>
<p>Now, the magic begins. Go to your repository and visit: <code>https://github.com/[yourName]/[yourRepo]/actions</code>. You&#39;ll be prompted to enable Actions for this repository. Hit enable and then commit your <code>ci.yml</code> file, push it up and check the Actions tab. You should begin to see your commits start popping up under the relevant action.</p>
<p><img src="https://media.graphcms.com/aEUbMRgISyCqln99ZcQA" alt="Github actions, list of builds"></p>
<p>In the image below, you can see the left side has the name of the action, the event that triggers it, and the jobs below that.</p>
<p><img src="https://media.graphcms.com/uZBSDGdVTLWMCTw8rAFp" alt="Github Action build page"></p>
<p>With luck, we now have our CI build successfully running. On to the deployment action. Copy the below into your <code>ci.yml</code>:</p>
<p><code>ci.yml</code></p>
<pre><code class="language-yaml">name: CI

on:
    pull_request:
    push:
        branches:
            - master

jobs:
    build:
        runs-on: ubuntu-18.04

        steps:
            - uses: actions/checkout@master
            - name: Use Node.js 10.x
              uses: actions/setup-node@v1
              with:
                  version: 10.x
            - name: Build
              run: |
                  npm install
                  npm run build --if-present
            - name: Deploy
              if: github.event_name == &#39;push&#39; &amp;&amp; github.ref == &#39;refs/heads/master&#39;
              env:
                  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
                  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET }}
              run: node scripts/deploy.js
</code></pre>
<p>You&#39;ll note that at the moment we&#39;re executing this on both:</p>
<ul>
<li>pushes to master branch</li>
<li>pull requests</li>
</ul>
<p>This means that unless we add a filter, we&#39;d be deploying on every pull request, which would likely break our app.</p>
<p>To the <code>Deploy</code> step, we&#39;ve added an <code>if</code>. This expression must evaluate to a boolean, which determines whether the step runs.</p>
<p>You could do things like check if a step was successful, or in our case:</p>
<ul>
<li>Make sure the event is a push</li>
<li>Make sure the branch is master</li>
</ul>
<p>Moving on to deployment: the <code>env</code> key is how we provide environment variables to the step. These are accessible in node scripts via <code>process.env</code>. Values can be plain hardcoded strings, or, as with the AWS keys above, pulled from the secrets manager Github provides within your repository. Don&#39;t worry about the deploy script itself yet.</p>
<p><img src="https://media.graphcms.com/UGgct0s3QO9pdEvv7r2n" alt="secrets.jpg"></p>
<p>At a previous job, they outlawed all external CI services because they were worried about their AWS IAM keys getting out in the event of a CircleCI data breach. Given that we&#39;re dealing with Github + MSoft, I have to believe there&#39;s some encryption magic happening when you upload and access these secrets. Once you&#39;ve set the value in the secrets, you will not be able to see it again and it will only be exposed to the CI agent.</p>
<p>I tried to log one of these secrets and, cleverly, it was censored in the logs (see below). Gone are the days of having to rotate your IAM keys because you accidentally logged them in your CI or Cloudwatch.</p>
<p><img src="https://media.graphcms.com/S2d1kr5ySHnmriJf17wU" alt="Secrets are censored in the build logs"></p>
<p>I&#39;ll come back to those AWS secrets shortly. From this point, all we have to do is deploy. I&#39;m going to offer three suggestions:</p>
<ul>
<li>AWS S3 static web hosting</li>
<li>Github pages</li>
<li>Now.sh <strong>Tutorial coming soon</strong></li>
</ul>
<p>I would argue that S3 is superior to Github Pages. The unfortunate part of Pages is that it can only serve files committed to the repository, so you have to commit your built files in order to host. However, Pages is free forever, unlike S3 sites, which will begin to cost money once you have significant traffic. If performance is a concern for you, look elsewhere, as neither of these is going to be blazing fast.</p>
<p>If simplicity is your priority, go with Github Pages: you&#39;ll avoid setting up an additional account (and potentially save $$).</p>
<p>That said, most sites I make are not under high demand, nor do they have many concurrent users, so for my purposes, S3 storage is more than enough.</p>
<p>I also use Cloudflare to cache the assets, so the majority of sessions download assets off the Cloudflare CDN rather than S3, which keeps my S3 usage very low. This also has the benefit of using Cloudflare&#39;s smart routing to make my Sydney-hosted S3 bucket much faster for international users.</p>
<h2>S3 Deployment</h2>
<p>See the example repository here: <a href="https://github.com/3stacks/github-actions-react-s3">https://github.com/3stacks/github-actions-react-s3</a></p>
<p>First I&#39;ll quickly go through how to get your S3 bucket and IAM keys and be a bit responsible in the process.</p>
<h3>Create the bucket</h3>
<ul>
<li>Go to your AWS panel and navigate to S3.</li>
<li>Click <code>Create Bucket</code> and give it a URL-friendly name matching the domain you will serve it from.</li>
<li>Choose whatever region is most appropriate for you. I chose Sydney (ap-southeast-2) because most of my traffic is Australian</li>
<li>Skip step 2</li>
<li>On step 3, untick the <code>Block all public access</code> checkbox</li>
<li>Visit your bucket, go to Permissions, then to Bucket Policy and paste the below in (replacing the arn)</li>
</ul>
<pre><code class="language-json">{
    &quot;Version&quot;: &quot;2012-10-17&quot;,
    &quot;Statement&quot;: [
        {
            &quot;Sid&quot;: &quot;PublicReadGetObject&quot;,
            &quot;Effect&quot;: &quot;Allow&quot;,
            &quot;Principal&quot;: &quot;*&quot;,
            &quot;Action&quot;: &quot;s3:GetObject&quot;,
            &quot;Resource&quot;: &quot;arn:aws:s3:::your-arn-here/*&quot;
        }
    ]
}
</code></pre>
<p>With this policy, anyone can read any object in the bucket, so please don&#39;t store anything private in there.</p>
<ul>
<li>In Properties, go to Static web hosting</li>
<li>Check &quot;Use this bucket to host a website&quot;</li>
<li>Make the index document <code>index.html</code></li>
<li>Your endpoint will be displayed there</li>
</ul>
<h3>Creating the IAM user</h3>
<p>We&#39;re going to start by making a deployment policy for this bucket. It ensures that if the keys to an IAM user leak, all you&#39;ll be giving away is access to that single bucket.</p>
<ul>
<li>Go to IAM</li>
<li>Go to Policies on the left</li>
<li>Change tabs to the JSON editor, rather than the Visual Editor</li>
<li>Paste in the following, replacing the ARN with your own bucket&#39;s ARN</li>
<li>Name your policy. I called mine [projectName]DeployPolicy</li>
</ul>
<pre><code class="language-json">{
    &quot;Version&quot;: &quot;2012-10-17&quot;,
    &quot;Statement&quot;: [
        {
            &quot;Sid&quot;: &quot;VisualEditor0&quot;,
            &quot;Effect&quot;: &quot;Allow&quot;,
            &quot;Action&quot;: &quot;s3:ListBucket&quot;,
            &quot;Resource&quot;: &quot;arn:aws:s3:::your-arn-here.io&quot;
        },
        {
            &quot;Sid&quot;: &quot;VisualEditor1&quot;,
            &quot;Effect&quot;: &quot;Allow&quot;,
            &quot;Action&quot;: [&quot;s3:PutObject&quot;, &quot;s3:GetObject&quot;, &quot;s3:DeleteObject&quot;],
            &quot;Resource&quot;: &quot;arn:aws:s3:::your-arn-here.io/*&quot;
        }
    ]
}
</code></pre>
<ul>
<li>On the left, navigate to Users</li>
<li>Create a User</li>
<li>Give it a relevant name ([projectName]DeployUser?), tick <code>Programmatic Access</code></li>
<li>Select <code>Attach existing policies directly</code></li>
<li>Search for your newly created policy and attach it to the user</li>
<li>Click through the wizard</li>
<li>Take note of your Access Key ID and Secret access key</li>
</ul>
<h3>Storing and using the secrets</h3>
<ul>
<li>Visit <a href="https://github.com/3stacks/%7ByourProject%7D/settings/secrets">https://github.com/3stacks/[yourProject]/settings/secrets</a></li>
<li>Click &#39;Add a new secret&#39;</li>
<li>Call it <code>AWS_ACCESS_KEY_ID</code> and copy the corresponding value from your newly created IAM user</li>
<li>Repeat for <code>AWS_SECRET</code></li>
</ul>
<p>Now your Github Action will pick these up in <code>ci.yml</code>. Copy the contents of the deployment script from here: <a href="https://github.com/3stacks/github-actions-react-s3/blob/master/scripts/deploy.js">https://github.com/3stacks/github-actions-react-s3/blob/master/scripts/deploy.js</a> to a directory (<code>./scripts/</code> is what was defined in <code>ci.yml</code>, but you can change this if you prefer a different directory). Make sure you update the S3 bucket name on line 24.</p>
<p>Your <code>ci.yml</code> workflow should resemble the below:</p>
<p><code>ci.yml</code></p>
<pre><code class="language-yaml">name: CI

on:
    pull_request:
    push:
        branches:
            - master

jobs:
    build:
        runs-on: ubuntu-18.04

        steps:
            - uses: actions/checkout@master
            - name: Use Node.js 10.x
              uses: actions/setup-node@v1
              with:
                  version: 10.x
            - name: Build
              run: |
                  npm install
                  npm run build --if-present
            - name: Deploy
              if: github.event_name == &#39;push&#39; &amp;&amp; github.ref == &#39;refs/heads/master&#39;
              env:
                  AWS_DEFAULT_REGION: ap-southeast-2
                  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
                  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET }}
              run: node ./scripts/deploy.js
</code></pre>
<p>Ensure you set the region you would prefer in the deploy env.</p>
<p>Now we&#39;re done! Commit those changes, push it and you&#39;ll see the build run and deploy your app.</p>
<p>Visit <code>http://[bucketName].s3-website-ap-southeast-2.amazonaws.com/</code> to verify.</p>
<p>From now on, commit on master and your code will be deployed automatically.</p>
<h2>Github pages deployment</h2>
<p>See the example repository here: <a href="https://github.com/3stacks/github-actions-react-pages">https://github.com/3stacks/github-actions-react-pages</a></p>
<p>Visit <code>https://github.com/[yourName]/[yourRepo]/settings</code> and scroll to the Github Pages section. Here you may enable Github Pages from the root of the <code>master</code> branch (i.e. you build into the root directory), from the <code>master</code> branch&#39;s <code>/docs</code> folder, or from a dedicated <code>gh-pages</code> branch. I prefer to use a separate branch, as it&#39;s generally advisable to keep your master branch clean of build files.</p>
<p>To enable the <code>gh-pages</code> branch, the repo must already have one. In your terminal, do the following:</p>
<pre><code class="language-bash">git checkout -B gh-pages
git push origin gh-pages
</code></pre>
<p>Back in your browser, select the <code>gh-pages</code> branch in the Pages dropdown (See below):</p>
<p><img src="https://media.graphcms.com/oeI03qciRcSo0BDBJ1rr" alt="Github pages setup"></p>
<p>From here, deployment is fairly painless. Let&#39;s take advantage of the Actions ecosystem Github is building and use <a href="https://github.com/marketplace/actions/deploy-to-github-pages?version=1.1.2">https://github.com/marketplace/actions/deploy-to-github-pages?version=1.1.2</a>, an action written by <a href="https://github.com/JamesIves/github-pages-deploy-action">James Ives</a>.</p>
<p>First we have to generate a personal access token.</p>
<ul>
<li><p>Go to <a href="https://github.com/settings/tokens">https://github.com/settings/tokens</a></p>
</li>
<li><p>Click <code>Generate new token</code></p>
</li>
<li><p>Select the appropriate scopes. We only need repo related scopes (below)</p>
</li>
<li><p>Do not share this token with anyone. It has read/write access to all your repositories.</p>
</li>
</ul>
<p><img src="https://media.graphcms.com/TCQIZRXoRj6s1ztL9WBS" alt="Github access token scopes"></p>
<p>Add the secret as per the <a href="#STORING_AND_USING">Storing and using the secrets</a> section above, calling your access token secret <code>GITHUB_ACCESS_TOKEN</code>.</p>
<p>Back in <code>ci.yml</code>:</p>
<pre><code class="language-yaml">name: CI

on:
    pull_request:
    push:
        branches:
            - master

jobs:
    build:
        runs-on: ubuntu-18.04

        steps:
            - uses: actions/checkout@master
            - name: Use Node.js 10.x
              uses: actions/setup-node@v1
              with:
                  version: 10.x
            - name: Build
              run: |
                  npm install
                  npm run build --if-present
            - name: Deploy to GitHub Pages
              uses: JamesIves/github-pages-deploy-action@1.1.3
              if: github.event_name == &#39;push&#39; &amp;&amp; github.ref == &#39;refs/heads/master&#39;
              env:
                  ACCESS_TOKEN: ${{ secrets.GITHUB_ACCESS_TOKEN }}
                  BRANCH: gh-pages
                  FOLDER: build
</code></pre>
<p>Our secret and other required arguments will be provided to the Pages Deploy action using the <code>env</code> key.</p>
<h3>If you aren&#39;t using a custom domain</h3>
<p>Due to the way the routing is done in github pages, assets referencing <code>/</code> will go to the root of your Pages (e.g. <code>https://3stacks.github.io</code>). This means none of the assets in CRA will be loaded. To get around this, in your <code>package.json</code>, add <code>&quot;homepage&quot;: &quot;.&quot;,</code>. This will make it resolve correctly.</p>
<p>Now we&#39;re done! Commit those changes, push it and you&#39;ll see the build run and deploy your app.</p>
<p>Visit <code>http://[yourName].github.io/[repo-name]</code> to verify.</p>
<p>From now on, commit on master and your code will be deployed automatically.</p>
<h2>Now.sh deployment</h2>
<p><strong>COMING SOON - This section is not complete</strong></p>
<h2>Tidbits</h2>
<h3>Containerised Steps</h3>
<p>Github Actions also supports using specific Docker containers from Dockerhub. So if you have complicated dependencies, you can choose to utilise this option. Use the <code>uses</code> key and give it a path in the format of: <code>docker://[image]:[tag]</code></p>
<p><a href="https://help.github.com/en/articles/configuring-a-workflow#referencing-a-container-on-docker-hub">https://help.github.com/en/articles/configuring-a-workflow#referencing-a-container-on-docker-hub</a>)</p>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[Software I actually believe in]]></title>
      <link>https://lukeboyle.com/blog/software-i-actually-believe-in/</link>
      <guid>https://lukeboyle.com/blog/software-i-actually-believe-in/</guid>
      <pubDate>Fri, 21 Dec 2018 00:00:00 GMT</pubDate>
      <description><![CDATA[Software I actually believe in]]></description>
      <content:encoded><![CDATA[<p>I do a lot of complaining about privacy and annoying products, but there are some that I believe really do a good job. These are some companies whose products and missions are appealing enough I&#39;d want to work there.</p>
<h2>YNAB (You Need A Budget)</h2>
<p>YNAB is a very interesting take on budgeting. I used to swear by my old way of using a spreadsheet, but it sort of falls apart when your pay is irregular (like for self-employed people or freelancers with variable income). You can connect your bank accounts for automatic transaction feeds but I prefer doing it manually as it seems to make you more mindful about your spending.</p>
<h3>Reporting</h3>
<p>I&#39;m a real metric head, so I appreciate some good graphs.</p>
<h4>Age of money</h4>
<p>Age of money tells you how long your money sits around after you get paid before you spend it. It&#39;s very encouraging to watch yourself breaking the pay-cheque to pay-cheque cycle.</p>
<p><img src="https://media.graphcms.com/MNZmhbELReKfDam86zf6" alt="age-of-money.jpg"></p>
<h4>Net worth</h4>
<p>Net worth is pretty self explanatory. It tracks your assets versus your debts and gives you a nice net worth graph over time.</p>
<p><img src="https://media.graphcms.com/HqmsYzBIRhyXeVdapb0u" alt="net-worth.jpg"></p>
<h4>Spending</h4>
<p>You also get a categorical breakdown of your spending, which you can click into to see more specific information about each category.</p>
<p><img src="https://media.graphcms.com/VToj0PllSWd0H5TqZ1eg" alt="spending.jpg"></p>
<h3>Free budgeting resources</h3>
<p>The bulk of their blog posts are not YNAB specific, but include general advice for budgeting, so if you&#39;re struggling, it may be helpful for you.</p>
<p>If that sounds appealing, there&#39;s a link below which includes a referral (if you sign up, I get a free month; if you aren&#39;t cool with that, just search for YNAB). They offer a month-long free trial if you feel like giving it a shot.</p>
<p><a href="https://ynab.com/referral/?ref=gHhYbKrXCgjj1zjM&utm_source=customer_referral">Read more here</a></p>
<h2>Cloudflare</h2>
<p>According to their website, Cloudflare now powers nearly 10 percent of all Internet requests. I&#39;ve been using them for a few years now and I&#39;m still in awe of them. First of all, when I started using them I was still paying for SSL certificates, then here comes this start-up that offers DDoS protection, SSL and caching and it&#39;s free... Where&#39;s the catch? I do find it somewhat suspicious that they&#39;re able to offer these services for free. Presumably the money they make off enterprise accounts offsets the usage at the free tier.</p>
<p>The DNS settings are really easy to use too. I use the analytics on this site and it seems to block a few threats a week.</p>
<p>They are now offering domain registrations which I haven&#39;t taken advantage of, but they seem to be cheaper than your run of the mill registrar.</p>
<p>They also do a lot of very interesting technical writing. Curious why their office has a wall covered in lava lamps?</p>
<ul>
<li>Technical version - <a href="https://blog.cloudflare.com/lavarand-in-production-the-nitty-gritty-technical-details/">https://blog.cloudflare.com/lavarand-in-production-the-nitty-gritty-technical-details/</a></li>
<li>Non-technical version - <a href="https://blog.cloudflare.com/randomness-101-lavarand-in-production/">https://blog.cloudflare.com/randomness-101-lavarand-in-production/</a></li>
<li>Article about it - <a href="https://www.fastcompany.com/90137157/the-hardest-working-office-design-in-america-encrypts-your-data-with-lava-lamps">https://www.fastcompany.com/90137157/the-hardest-working-office-design-in-america-encrypts-your-data-with-lava-lamps</a></li>
</ul>
<p>Check them out: <a href="https://www.cloudflare.com/">https://www.cloudflare.com/</a></p>
<h2>1Password</h2>
<p>Password managers are certainly rising in popularity and it&#39;s a good thing. The password is a very flawed authentication method, especially when you re-use the same weak password across multiple sites. If you can instead remember one very strong password, you&#39;ll be able to generate strong, unique passwords for every service you use. They also now have built-in support for the Google Authenticator protocol with TOTP tokens.</p>
<p>I really like the mobile and desktop apps and they have recently released a browser only client. Along with their provided cloud sync options, they also offer personal cloud storage syncing.</p>
<p>You can also enable travel mode for when you&#39;re overseas which stops syncing sensitive vaults.</p>
<p>I use the shared vaults a lot to share with coworkers.</p>
<p>I&#39;m planning on keeping this list updated should anything change, so keep your eyes on this post.</p>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[My Muscle Chef: A case study for iterative development]]></title>
      <link>https://lukeboyle.com/blog/case-study-for-iterative-development/</link>
      <guid>https://lukeboyle.com/blog/case-study-for-iterative-development/</guid>
      <pubDate>Wed, 28 Nov 2018 00:00:00 GMT</pubDate>
      <description><![CDATA[My Muscle Chef: A case study for iterative development]]></description>
      <content:encoded><![CDATA[<p>Agile development is something that has evolved to become a bit of a joke in the software industry, much like an obscure gag amongst friends that evolves over time to the point where the humour is incomprehensible to anyone on the outside. Today, we may find ourselves being handed little laminated cut-outs with clipart of t-shirts on them and being implored to stick them on the wall, playing estimate poker, or writing love letters to team members in a retrospective meeting. In my experience, it seems to be common understanding amongst programmers that the ceremonies associated with Agile err on the side of bizarre, but businesses love it. In my estimation, what they love is the idea that they are fostering a collaborative environment. Whether or not it&#39;s just an illusion is another story, but in the age of Blockchain, chatbots, and machine learning, Agile is king.</p>
<p>&quot;Agile&quot; in its current sense appears to be derived from the <a href="http://agilemanifesto.org/principles.html">Agile manifesto</a>; however, agile practices have roots through the last four decades of programming history. Recently I read The Mythical Man-Month (Brooks, 1975), in which Brooks extols the virtues of things like disposable prototypes, testing as you build, and always having a working program.</p>
<p>One of the most recognisable and user-friendly explanations of this concept is &quot;The Agile Bicycle&quot; illustrated by <a href="http://blog.crisp.se/2016/01/25/henrikkniberg/making-sense-of-mvp">Henrik Kniberg</a>.</p>
<p><img src="https://media.graphcms.com/jE4B6SqiQAWWQqNdnIco" alt="mvp bicycle"></p>
<p>This is a great example of delivering a minimum viable product (MVP). There are many benefits to this method:</p>
<p>Regardless of how rough around the edges your product is, if it is functional, then people can use it. It may not have the appeal to gain significant traction, but you can start getting at least some ROI, and - perhaps more importantly – user feedback. If a product is fundamentally flawed, it should be visible at any stage. According to Brooks, an incremental build method is better because:</p>
<ul>
<li>We can begin user testing very early, and</li>
<li>We can adopt a build-to-budget strategy that protects absolutely against schedule or budget overruns <strong>(at the cost of possible functional shortfall)</strong></li>
</ul>
<p>The most important part of that is that while we may not deliver the full feature set at the initial release date, at the very least, we’re not going to be giving people a car without a steering wheel.</p>
<p>So, how does a company selling pre-packaged meals relate to software MVPs?</p>
<p>I’ve been using them for around a year. I picked them because, unlike similar competitors, they offered meals with higher calorie counts at a similar price point. My first delivery came in an unmarked Styrofoam box. Styrofoam is good at insulating contents; however, it requires specialised machinery to recycle and takes untold millions of years to degrade, so it’s not a great material. The meals came in take-away style containers with a sticker slapped on, which were easily broken in transit, and they were all frozen. On the technical side, subscriptions were not manageable by the user and had to go through customer service, which added some friction. It wasn’t a mind-blowing experience, but the meals all tasted good and, most importantly, the business model worked.</p>
<p><img src="https://media.graphcms.com/doa6SuMlQr6PTcAxeBgm" alt="foodz.jpg"></p>
<p>Over the last 12 months I’ve observed various improvements to their offering.</p>
<ol>
<li>They replaced the Styrofoam boxes with wool insulated cardboard boxes</li>
</ol>
<p><img src="https://media.graphcms.com/YVnyQWTgS9OFeTcu9G0k" alt="box.jpg"></p>
<ol start="2">
<li>They replaced the take-away style containers with vacuum sealed containers to allow the foods to last longer without being frozen. This paved the way for them to start offering fresh meals (which they have as of this week)</li>
<li>They upgraded their online services so that users can edit their subscriptions and more easily delay or cancel orders.</li>
<li>They improved their distribution to where they are now sold in retail spaces around the country, rather than simply being a drop at the door delivery service</li>
</ol>
<p>While people starting to use them now will see the last year of enhancements as the norm, longer-term customers will have seen the service gradually improve, which increases satisfaction. Rather than overreaching and increasing the risk of being crushed by their overhead, My Muscle Chef took an iterative approach and gradually built a loyal base of customers which enables further innovation.</p>
<p>In my eyes, iterative development is inarguably superior to traditional waterfall project management where oftentimes budget, schedule and feature set are inflexible. As the saying goes, &quot;you don’t know what you don’t know&quot;, and as such, progressive discovery will often prove many of your initial assumptions incorrect. It’s very refreshing to see companies with more tangible products embracing Agile principles and prospering. As they say, the proof is in the pudding.</p>
<p>To be clear, I am in no way affiliated with this company, I just like eating their food. If you do end up signing up, consider using my referral code (S1HKD51IM) and we’ll both get $15 credit. Love those free meals.</p>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[Converting a WordPress site to a React static site]]></title>
      <link>https://lukeboyle.com/blog/converting-wordpress-site-to-static/</link>
      <guid>https://lukeboyle.com/blog/converting-wordpress-site-to-static/</guid>
      <pubDate>Mon, 08 Jan 2018 00:00:00 GMT</pubDate>
      <description><![CDATA[Converting a WordPress site to a React static site]]></description>
      <content:encoded><![CDATA[<p>The last iteration of this website was a truly insane infinite scrolling carousel that was very overwhelming to anyone who dared behold it, so with this version (which recently had its first birthday) I decided to go with a much more content focused design, since I actually wanted to start writing more publicly. There&#39;s also something to be said for not confusing people or forcing them into epileptic fits.</p>
<p>At the time, I didn&#39;t want to sink a lot of time into it, so WordPress was identified as the path of least resistance. I used Bedrock by Roots to version control my plugins and WordPress with Composer. It was working well and was quite fast (for a WordPress website), but it still suffered from a fairly fundamental issue of not being able to version control content. WP apologists might tell you to store your database dumps in your repo, but to them I say: &quot;yeah, nah&quot;. If you ever have the misfortune of looking at a WP database dump, you&#39;ll realise there’s about a billion lines of muck which is totally irrelevant to the content and composition of your website, and I don’t particularly like the idea of storing my users table in a public git repository anyway. In spite of my whinging, the version-controlled content pain point was more of an under-the-tongue ulcer type of pain than a broken arm, so I didn&#39;t worry about it. One day I made the mistake of upgrading the WP version on my server when I hadn&#39;t copied the install to my local, so there was a lot of out-of-sync content. You can imagine how happy I was to find that my login no longer worked, I couldn’t reset my password, and changing the password directly in the database didn&#39;t work either. I took an SQL dump of the database and loaded it into my local, only to find the Advanced Custom Fields don’t appear to be stored in the database, so when I salvaged the content it was totally broken.</p>
<p>Then it hit me. What if I get a JSON dump of my posts from the database and turn that into a static version? So, what output format would be most suitable for an archive of text posts?</p>
<h3>Markdown: A New Hope</h3>
<p>Markdown was invented by notable &#39;f-word&#39; writer <a href="https://daringfireball.net">John Gruber</a> in 2004 and it has since become a staple in the development world. I chose Markdown as the output because it provides simple shorthands to represent markup, so I knew I could get tidy archiving in Github that renders nicely as HTML in the web view, while the posts stay readable (and writable, for future posts) in source form. I created a <a href="https://www.npmjs.com/package/@lukeboyle/wordpress-to-markdown">node package</a> for generating an archive and published it to npm in the hopes that it might address the problem for other people too.</p>
<p>Now I have my posts nicely sorted and stored in a repo, but the problem with generating an archive of Markdown files is then you just have an archive of Markdown files to deal with.</p>
<p>The website is built with the static site generator &quot;Gatsby&quot;, so all pages are React components, which adds a lot of flexibility. For example, when generating blog post components I can make the title render as a link to the blog post slug, but only when it appears on the front page.</p>
<p>The ingestion strategy is to add the blog-posts repository as a submodule so I can then update and push those independently. Then, at compile time, I would read the archive of blog posts and generate:</p>
<ol>
<li>A root blog page that lists the content of posts in reverse-chronological order with pagination</li>
<li>An individual page for each blog post.</li>
</ol>
<p>The script that is responsible for this is really something to behold (you can see it <a href="https://github.com/3stacks/portfolio-2016/blob/master/scripts/blog-post.js">here</a>). The process is as follows: all markdown files are grabbed from the archive, then for each post, the script parses out a metadata table at the top of the file that holds the post title and whether or not it is a draft. The post is then passed to the markdown renderer and a blog post component is generated with that rendered content. That blog post component is then given its own page component and stitched onto the aggregate blog post list. The blog post list is then split into pages which are output as components, and voilà. I suppose if there’s a gap for it, I could publish a &quot;WordPress Markdown archive to React static site&quot; package, but it may be a bit too niche.</p>
<p>The end result is an overall slimmer repository since all of the blog posts are stored in a different repository and the generated pages are not committed which lends itself perfectly to an automated deployment service. It also allowed for much less human intervention in the creative process.</p>
<p>The main caveat I’ve discovered in this transition is that I didn&#39;t have a solution for porting assets (such as embedded images) to the markdown archive. Currently, any embedded images will 404 until they are added manually. This definitely isn&#39;t ideal and if I ever get a chance I plan to package all the linked assets down into each blog post.</p>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[Project estimations made easy]]></title>
      <link>https://lukeboyle.com/blog/project-estimations-made-easy/</link>
      <guid>https://lukeboyle.com/blog/project-estimations-made-easy/</guid>
      <pubDate>Tue, 19 Dec 2017 00:00:00 GMT</pubDate>
      <description><![CDATA[Project estimations made easy]]></description>
      <content:encoded><![CDATA[<p>I recently published a post on the Stak Digital engineering blog about our new app <a href="https://guesstimate.io">Guesstimate</a> and project estimation in general. To read the post, head on over to the post here: <a href="https://stak.digital/blog/project-estimations-made-easy">https://stak.digital/blog/project-estimations-made-easy</a></p>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[Responsive Definition Lists: Solved by flexbox]]></title>
      <link>https://lukeboyle.com/blog/responsive-definition-lists-solved-by-flexbox/</link>
      <guid>https://lukeboyle.com/blog/responsive-definition-lists-solved-by-flexbox/</guid>
      <pubDate>Wed, 29 Mar 2017 00:00:00 GMT</pubDate>
      <description><![CDATA[Responsive Definition Lists: Solved by flexbox]]></description>
      <content:encoded><![CDATA[<p>Consider the definition list. Here&#39;s a simple example. The standard behaviour would have the term and definition both as block level elements, naturally stacking down like so.</p>
<p>Term 1</p>
<p>A longer definition. A definition usually expands on the term.🌚</p>
<p>Term 2</p>
<p>A longer definition. A definition usually expands on the term.🌚</p>
<p>But what if we want the term and definition to sit inline? This usage is semantically a dl, but traditionally, this has been a serious pain in the ass if you want consistent spacing between the terms/definitions. The image below exhibits a compromise I made with the designer on a previous project. <img src="http://lukeboyle.com/app/uploads/2017/03/Screen-Shot-2017-03-29-at-10.58.59-pm.png" alt=""> Making the dt/dd inline-block works to a certain degree; however, when setting widths explicitly you will have serious issues going down the breakpoints. The <code>display:block</code> span just forces the content to stay on its respective line. This, however, is not correct usage, as a <code>dl</code> is only supposed to have <code>dt</code> or <code>dd</code> elements inside it. EDIT: Since working on this project, it looks like we&#39;re now permitted to wrap a <code>dt+dd</code> group in a div to control flow. So how can flexbox help us here?</p>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[CSS Variables: A Case Study]]></title>
      <link>https://lukeboyle.com/blog/css-variables-a-case-study/</link>
      <guid>https://lukeboyle.com/blog/css-variables-a-case-study/</guid>
      <pubDate>Sun, 26 Mar 2017 00:00:00 GMT</pubDate>
      <description><![CDATA[CSS Variables: A Case Study]]></description>
      <content:encoded><![CDATA[<p>In <a href="https://agander.io">Agander</a>, I made my first forays into colour themes. In a very simple approach, I have two colour schemes (light and dark) which are displayed on the body as a class (scheme-light and scheme-dark) respectively. The general approach for styling a component is as such: <code>_button.scss</code></p>
<pre><code>// Define base component styles (e.g. sizing/positioning)
.button {
  border: 1px solid;
  padding: 6px 5px;
}

// Dark Color scheme styles
.scheme-dark {
  .button {
    background: white;
    border-color: white;
    color: black;
  }
}

// Light Color scheme styles
.scheme-light {
  .button {
    background: black;
    border-color: black;
    color: white;
  }
}
</code></pre>
<p>Although this is quite lightweight, there are still issues.</p>
<ol>
<li>It puts a hard dependency on codebase changes to add, remove or modify themes,</li>
<li>It makes user defined colour schemes all but impossible</li>
<li>Simple component partials are no longer neat self-contained partials with one selector defining all the component styles</li>
<li>There are several cases where I need to have colours that contradict the global colour scheme (e.g. black text for the white modal dialog) and it requires the use of !important and many colour overrides.</li>
<li>The extensibility of the approach is very limited because as more themes are added, the stylesheets WILL get bloated and overweight.</li>
</ol>
<p>Enter the CSS Variable (the hero we need). CSS Variables are defined like so:</p>
<pre><code>:root {
  // Initialise the variable
  --primary-color: pink;
}

p {
  color: var(--primary-color); // it&#39;s pink, baby.
}
</code></pre>
<p>The <code>var</code> function also takes a second argument, which is a fallback value.</p>
<pre><code>p {
  color: var(--primary-color, red);
}
</code></pre>
<p>CSS Variables follow block scoping principles, so variables defined in <code>:root</code> are considered to be global variables (but may be overridden inside specific components) and variables defined in any other element are scoped to that block of styles. This is broken down very nicely in a recent <a href="https://www.smashingmagazine.com/2017/04/start-using-css-custom-properties/#scope-and-inheritance">Smashing Magazine article</a>.</p>
<h3>How can CSS Vars help Agander?</h3>
<p>I recently wrote a library to ingest variable names and values and spit them onto the root element (see <a href="https://www.npmjs.com/package/@lukeboyle/sync-vars">the package</a>). The idea is that each theme would have all relevant variables defined in objects like so:</p>
<pre><code>const viewState = {
  currentTheme: &#39;darkScheme&#39;
}

const themes = {
  darkScheme: {
    &#39;primary-color&#39;: {
      hex: &#39;#FFF&#39;
    }
  },
  lightScheme: {
    &#39;primary-color&#39;: {
      hex: &#39;#000&#39;
    }
  }
}
</code></pre>
<p>And then when the currentTheme changes:</p>
<pre><code>import syncVars from &#39;@lukeboyle/sync-vars&#39;;

function updateCssVariablesWithCurrentScheme(colorScheme) {
  syncVars(themes[colorScheme]);
}

// if we call that function with &#39;darkScheme&#39;
updateCssVariablesWithCurrentScheme(&#39;darkScheme&#39;);

&lt;html style=&quot;--primary-color: #FFF;&quot;&gt;&lt;/html&gt;
</code></pre>
<p>So, how does this help? For one thing, with this approach, I no longer have to worry about adding the colour scheme classes to the body, and I don&#39;t have to do any hacky overrides, etc. <code>_buttons.scss</code> now looks like this:</p>
<pre><code>.button {
  border: 1px solid var(--text-color-var);
  padding: 6px 5px;
  background: var(--button-background-color-var);
  color: var(--text-color-var);
}
</code></pre>
<p>Looking forward, this approach also means that custom colour themes are very nearly in reach. It also means that colour schemes could be changed on the fly. The user could have a colour swatch tool and be previewing their theme changes live. Taking it even further, it means that the colour schemes no longer need to be a part of the codebase. It could just as easily be a JSON file on the server and changes could be flexibly pushed. Why is this exciting? Say it&#39;s Christmas time and you want to get into the spirit of things... With a few string replacements you have a temporary festive theme to force upon your users.</p>
<h3>Other Applications</h3>
<h4>Accessibility</h4>
<p>Sites or apps could have buttons to activate a colour-blind mode, and specific &#39;problem&#39; colours could be swapped out for friendly colours. Additionally, high contrast modes would be a breeze.</p>
<h4>Easter Eggs</h4>
<p>Users could activate alternate modes for websites to get a different experience.</p>
<h3>Retrospective</h3>
<p>CSS variables are getting me really excited because they&#39;re the first minimal-overhead approach to theming in front-end only applications. This is something that will reward well structured stylesheets and result in a better experience for the user. I am looking forward to rolling out custom themes in Agander and finally getting around to making the flat UI theme I have wanted to make for some time.</p>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[CSS Buttons: Solved with Flexbox]]></title>
      <link>https://lukeboyle.com/blog/css-buttons-solved-with-flexbox/</link>
      <guid>https://lukeboyle.com/blog/css-buttons-solved-with-flexbox/</guid>
      <pubDate>Thu, 09 Mar 2017 00:00:00 GMT</pubDate>
      <description><![CDATA[CSS Buttons: Solved with Flexbox]]></description>
      <content:encoded><![CDATA[<p>There are two commonly accepted approaches to making buttons with CSS, but both of them are a little bit shit. What if I told you there was another way? (<code>morpheus.wav</code>)</p>
<h2>Option 1: Padding for vertical centering (Blue Pill)</h2>
<pre><code class="language-html">&lt;style&gt;
    .button-padding-approach {
        font-size: inherit;
        -webkit-appearance: none;
        border-radius: 0;
        border-style: solid;
        border-width: 0;
        cursor: pointer;
        font-weight: normal;
        line-height: normal;
        margin: 0;
        position: relative;
        text-align: center;
        text-decoration: none;
        display: inline-block;
        padding: 1rem 2rem 1.0625rem 2rem;
        font-size: 16px;
        background-color: #999;
        color: #000;
        max-width: 170px;
    }
&lt;/style&gt;

&lt;div&gt;
    &lt;a class=&quot;button-padding-approach&quot; href=&quot;#&quot;&gt;A Button&lt;/a&gt;
    &lt;a class=&quot;button-padding-approach&quot; href=&quot;#&quot;&gt;A Button that breaks to two lines&lt;/a&gt;
&lt;/div&gt;
</code></pre>
<p>This approach works okay, and it&#39;s good for multi-line text (buttons where the marketing team sanctioned too much copy). The problem with typography is that glyphs can have descenders (as in y and j) which push the bottom of the bounds down. So if you want to properly vertically center your text, you have to baby the padding so much that it becomes too much of a pain in the ass. The padding on the above buttons is <code>padding: 1rem 2rem 1.0625rem 2rem;</code>. 5 significant figures for bottom padding? I don&#39;t think so.</p>
<h2>Option 2: Line Height for vertical centering (Red Pill)</h2>
<pre><code class="language-html">&lt;style&gt;
    .button-lineheight-approach {
        -webkit-appearance: none;
        border-radius: 0;
        border-style: solid;
        border-width: 0;
        cursor: pointer;
        font-weight: normal;
        line-height: normal;
        margin: 0;
        position: relative;
        text-align: center;
        text-decoration: none;
        display: inline-block;
        font-size: 16px;
        background-color: #999;
        color: #000;
        max-width: 170px;
        height: 50px;
        line-height: 50px;
        padding: 0 2rem 0;
    }
&lt;/style&gt;

&lt;div&gt;
    &lt;a class=&quot;button-lineheight-approach&quot; href=&quot;#&quot;&gt;A Button&lt;/a&gt;
    &lt;a class=&quot;button-lineheight-approach&quot; href=&quot;#&quot;&gt;A Button that breaks to two lines&lt;/a&gt;
&lt;/div&gt;
</code></pre>
<p>This approach is a lot less hands on for the vertical alignment. You set <code>height: 50px;</code> and <code>line-height: 50px;</code> and voila, perfect vertical alignment. Until you need two lines, and then the text bleeds out of the button because you thought a CTA would never be more than 3 words long. At this point you&#39;re forced to either increase the button width or reduce your font-size, and neither is very designer friendly.</p>
<h2>Option 3: Flexbox (dubbed by me as the green pill)</h2>
<pre><code class="language-html">&lt;style&gt;
    .button-flexbox-approach {
        display: flex;
        justify-content: center;
        align-items: center;
        -webkit-appearance: none;
        border-radius: 0;
        border-style: solid;
        border-width: 0;
        cursor: pointer;
        font-weight: normal;
        line-height: normal;
        margin: 0;
        position: relative;
        text-align: center;
        text-decoration: none;
        padding: 1rem 2rem 1.0625rem 2rem;
        font-size: 16px;
        background-color: #34495e;
        color: #fff;
    }
    .button-flexbox-approach:hover {
        color: #fff;
    }
    .flex-button-container {
        display: inline-block;
    }
&lt;/style&gt;

&lt;div&gt;
    &lt;div class=&quot;flex-button-container&quot;&gt;
        &lt;a class=&quot;button-flexbox-approach&quot; href=&quot;#&quot;&gt;A Button&lt;/a&gt;
    &lt;/div&gt;

    &lt;div class=&quot;flex-button-container&quot; style=&quot;max-width: 170px;&quot;&gt;
        &lt;a class=&quot;button-flexbox-approach&quot; href=&quot;#&quot;&gt;A Button that breaks to two lines&lt;/a&gt;
    &lt;/div&gt;
&lt;/div&gt;
</code></pre>
<p>The main caveat of this approach is that the button now needs a container. The container doesn&#39;t need anything fancy on it, just <code>display: inline-block;</code> to allow the content to naturally scale, and if you want to restrict how large the button can be, add a <code>max-width</code>. Other than that, this approach is pretty bullet-proof from my testing and I like it a lot.</p>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[Functional Form Validation in JavaScript (aka: Inheriting bad JavaScript)]]></title>
      <link>https://lukeboyle.com/blog/functional-form-validation-in-java-script-aka-inheriting-bad-java-script/</link>
      <guid>https://lukeboyle.com/blog/functional-form-validation-in-java-script-aka-inheriting-bad-java-script/</guid>
      <pubDate>Mon, 30 Jan 2017 00:00:00 GMT</pubDate>
      <description><![CDATA[Functional Form Validation in JavaScript (aka: Inheriting bad JavaScript)]]></description>
      <content:encoded><![CDATA[<p>I was recently given the job of rebuilding a particularly bad landing page from an external company. Apart from class names, styles and markup being all over the place, there was a particularly obnoxious form validation script sitting in the middle of the page. An excerpt of the script can be seen below, and this documents the process I took when reviving the JS side of things.</p>
<pre><code>&lt;script type=&quot;text/javascript&quot;&gt;

  var flagValidation;

  /* validation for &#39;phone number&#39; */
  function PhoneNumberValidation() {
    var phoneNum = document.getElementsByName(&quot;Phone&quot;)[0].value;
    var normalPhonepattern = /^[0-9\s\-\+]{6,14}$/g;

    if(!normalPhonepattern.test(phoneNum))
    {
      flagValidation = false;
      document.getElementById(&quot;PhoneValidation&quot;).innerHTML = &quot;Only numbers, &#39;-&#39; and &#39;+&#39; characters are accepted&quot;
    }
    else
      document.getElementById(&quot;PhoneValidation&quot;).innerHTML = &quot;&quot;
  }

  function SubmitDetails(){
    flagValidation = true;
    PhoneNumberValidation();

    return flagValidation;
  }

&lt;/script&gt;
</code></pre>
<p>So what is wrong with this picture?</p>
<ul>
<li>There&#39;s no reason for this to be a script tag on the page, let&#39;s make it an external script</li>
<li>Mutation: basing the validation on mutating the variable to false should not be the responsibility of these functions</li>
<li>The flagValidation variable being globally scoped and mutated/used in several places leaves a lot of places for it to fail when making changes</li>
<li>The functions are doing too much. When looking at it from a functional standpoint, they should just be returning a bool, and a final validate function can follow up.</li>
<li>Repeating code (e.g. <code>document.getElement...</code>) unnecessarily</li>
</ul>
<p>When you allow your functions to be purely functional, this function...</p>
<pre><code>  function PhoneNumberValidation() {
    var phoneNum = document.getElementsByName(&quot;Phone&quot;)[0].value;
    var normalPhonepattern = /^[0-9\s\-\+]{6,14}$/g;

    if(!normalPhonepattern.test(phoneNum))
    {
      flagValidation = false;
      document.getElementById(&quot;PhoneValidation&quot;).innerHTML = &quot;Only numbers, &#39;-&#39; and &#39;+&#39; characters are accepted&quot;
    }
    else
      document.getElementById(&quot;PhoneValidation&quot;).innerHTML = &quot;&quot;
  }
</code></pre>
<p>Can become...</p>
<pre><code>function isPhoneNumberValid() {
  const phoneNumber = document.getElementsByName(&quot;Phone&quot;)[0].value;
  const phoneNumberRegex = /^[0-9\s\-\+]{6,14}$/g;
  return phoneNumberRegex.test(phoneNumber);
}
</code></pre>
<p>Much prettier, right? Once we&#39;ve refactored all of those individual functions, the main input validation function looks like this:</p>
<pre><code>function validateFormInputs(event) {

    let isFormValid = true;
    const phoneNumberFeedback = document.getElementById(&quot;PhoneValidation&quot;);

    if (isPhoneNumberValid()) {
        phoneNumberFeedback.innerHTML = &#39;&#39;;
    } else {
        phoneNumberFeedback.innerHTML = &quot;Only numbers, &#39;-&#39; and &#39;+&#39; characters are accepted&quot;;
        isFormValid = false;
    }

    if (isFormValid) {
        contactForm.removeEventListener(&#39;submit&#39;, validateFormInputs);
        return true;
    } else {
        event.preventDefault();
    }

}
</code></pre>
<p>It&#39;s cleaner, sure, but I&#39;m still not okay with using and mutating that <code>isFormValid</code> variable and <code>innerHTML</code> appearing every other line. Let&#39;s take it further. Let&#39;s outsource the error message work to a utility function.</p>
<pre><code>function generateErrorMessage(element, message) {
  return element.innerHTML = message;
}

// So we use that like this...

if (isPhoneNumberValid()) {
  generateErrorMessage(phoneNumberFeedback, &#39;&#39;);
} else {
  generateErrorMessage(phoneNumberFeedback, &#39;Cannot be empty&#39;);
  isFormValid = false;
}
</code></pre>
<p>The next step is to stop mutating that validity flag. To do this, I&#39;m going to bundle all the validation methods into an object and then reduce that to return an isFormValid bool.</p>
<pre><code>const fields = {
  phoneNumber: {
    isFieldValid: function() {
      const phoneNumber = document.getElementsByName(&quot;Phone&quot;)[0].value;
      const phoneNumberRegex = /^[0-9\s\-\+]{6,14}$/g;
      return phoneNumberRegex.test(phoneNumber);
    },
    userFeedbackElement: document.getElementById(&quot;PhoneValidation&quot;),
    errorMessage: &quot;Only numbers, &#39;-&#39; and &#39;+&#39; characters are accepted&quot;
  }
};

// Generate an array from the keys of the fields object and reduce it
Object.keys(fields).reduce((acc, curr) =&gt; {
    // do stuff
}, true);
</code></pre>
<p>If you&#39;re not familiar with <code>Array.reduce</code>, it will iterate over each item in the array and allow you to process them while carrying a value between iterations. The arguments are <code>acc</code> (the accumulator) and <code>curr</code> (the current item). The idea is, we&#39;re going to execute each function and then show/hide error messages accordingly. The function now looks like this:</p>
<pre><code>function validateFormInputs(event) {

  const isFormValid = Object.keys(fields).reduce((acc, curr) =&gt; {
    const currentField = fields[curr];

    if (currentField.isFieldValid()) {
      generateErrorMessage(currentField.userFeedbackElement, &#39;&#39;);
      return acc;
    } else {
      generateErrorMessage(currentField.userFeedbackElement, currentField.errorMessage);
      return false;
    }
  }, true);

  if (isFormValid) {
    contactForm.removeEventListener(&#39;submit&#39;, validateFormInputs);
    return true;
  } else {
    event.preventDefault();
  }

}
</code></pre>
<p>This implementation is clearly case-by-case. It works for my particular scenario because there&#39;s only one validation condition for each field. If there were more rules, the approach would need to change to compensate, and it may not be able to be as dynamic. It should also be noted that this is a fairly over-engineered solution. I wouldn&#39;t say that the original approach is <em>wrong</em>, but my approach looks at the same problem from a functional programming standpoint and I believe it is much cleaner and much more robust. For a view of the entire file, see my gist at <a href="https://gist.github.com/3stacks/c5c49904684e4ddec48aa017ab912db9">https://gist.github.com/3stacks/c5c49904684e4ddec48aa017ab912db9</a></p>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[Automating CSS regression testing with Argus Eyes (PhantomJS)]]></title>
      <link>https://lukeboyle.com/blog/automating-css-regression-testing-with-argus-eyes-phantom-js/</link>
      <guid>https://lukeboyle.com/blog/automating-css-regression-testing-with-argus-eyes-phantom-js/</guid>
      <pubDate>Wed, 14 Dec 2016 00:00:00 GMT</pubDate>
      <description><![CDATA[Automating CSS regression testing with Argus Eyes (PhantomJS)]]></description>
      <content:encoded><![CDATA[<p>I have had my eyes on Argus Eyes (<a href="http://arguseyes.io/">http://arguseyes.io/</a>) for quite some time, and now I have the time to implement it at work. The interface is rather simple: you define your browser breakpoints, the pages, and the parts of the pages you wish to capture. All <code>components</code> are defined with a name and a selector, for example &quot;.site-nav&quot; or &quot;body&quot;. You define all components in the components array, but you can then cherry-pick which ones are used on each page; for example, the homepage may use the hero component while the about page may not.</p>
<pre><code class="language-json">{
    &quot;sizes&quot;: [&quot;320x480&quot;, &quot;1280x768&quot;, &quot;1920x1080&quot;],
    &quot;pages&quot;: [
        {
            &quot;name&quot;: &quot;homepage&quot;,
            &quot;url&quot;: &quot;http://localhost:3000/&quot;,
            &quot;components&quot;: [&quot;hero&quot;, &quot;all&quot;]
        }
    ],
    &quot;components&quot;: [
        {
            &quot;name&quot;: &quot;all&quot;,
            &quot;selector&quot;: &quot;body&quot;
        },
        {
            &quot;name&quot;: &quot;hero&quot;,
            &quot;selector&quot;: &quot;.hero&quot;
        }
    ]
}
</code></pre>
<p>Since I&#39;m generally against installing npm packages globally (and you probably should be <a href="https://www.sitepoint.com/solve-global-npm-module-dependency-problem/">too</a>), I define my capture scripts in <code>package.json</code> (see the sketch just after the list below). This presents the first issue: Argus is used like so: <code>argus-eyes capture &lt;branch-name&gt;</code>. But this of course only names the capture for you; it&#39;s your responsibility to switch branches. So the workflow becomes:</p>
<ul>
<li>Check out the <code>develop</code> branch</li>
<li>Run <code>argus-eyes capture develop</code> (this is the baseline)</li>
<li>Check out <code>feature-branch-name</code></li>
<li>Run <code>argus-eyes capture feature-branch-name</code></li>
<li>Run <code>argus-eyes compare develop feature-branch-name</code></li>
</ul>
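<p>For reference, the <code>package.json</code> scripts I&#39;m alluding to look something like this (a sketch; the script names are my own, and npm resolves the locally installed <code>argus-eyes</code> binary for you):</p>
<pre><code class="language-json">{
    &quot;scripts&quot;: {
        &quot;capture&quot;: &quot;argus-eyes capture&quot;,
        &quot;compare&quot;: &quot;argus-eyes compare&quot;
    }
}
</code></pre>
<p>You then pass the branch names through npm, e.g. <code>npm run capture -- develop</code>.</p>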
<p>Argus then uses blink-diff to compare the two sets of screenshots you just captured (note: you shouldn&#39;t change your config between captures) and outputs any screenshots with visual differences. For example, bumping the padding on your nav will produce a diff image highlighting the changed region. It&#39;s not a super intelligent representation, but it does quickly show you that something is wrong. In my opinion, the current workflow makes it almost not worth bothering. So how do we make it a one-step test?</p>
<h2>Automation</h2>
<p>I am attempting to simulate this entire process in node. For this, we&#39;ll need a few things.</p>
<ul>
<li>The ability to use git functions in node (<a href="http://www.nodegit.org/">http://www.nodegit.org/</a>)</li>
<li>The ability to execute console commands in node (for this, I am using <a href="https://www.npmjs.com/package/shelljs">shelljs</a>)</li>
</ul>
<p>I&#39;ve tried to make the node script as pure as possible. I created a file called <code>argus-test.js</code> with an individual function for each git action. First, a function to open the repository:</p>
<pre><code class="language-javascript">/**
 * @param {string} path - path to the repository (.git)
 * @returns {Promise}
 */
function openRepository(path) {
    return Git.Repository.open(path);
}

// Path is based on current working directory
const repoPath = require(&quot;path&quot;).resolve(&quot;./.git&quot;);

openRepository(repoPath).then(...)
</code></pre>
<p>openRepository returns a Promise that resolves with a reference to the repository. To act on the repository, we need to keep track of this returned value, and since all of the nodegit functions return Promises, we&#39;re going to be seeing a lot of <code>then</code>.</p>
<pre><code class="language-javascript">// Initialise this let to keep track of which branch we&#39;re on
let featureBranch;

/**
 * @param {Repository} repo - The reference to the repository object
 * @returns {Promise}
 */
function saveCurrentBranch(repo) {
    return repo.getCurrentBranch();
}

openRepository(repoPath).then(
    repo =&gt; {
        saveCurrentBranch(repo).then(branchRef =&gt; {
            // getCurrentBranch resolves with a Reference; shorthand() gives the branch name
            featureBranch = branchRef.shorthand();
        });
    },
    err =&gt; {
        // Usually would only happen if you give it the incorrect path
        throw err;
    }
);
</code></pre>
<p>Now the current feature branch is stored for later. Inside the callback where we set the <code>featureBranch</code> variable, we execute our capture command.</p>
<pre><code class="language-javascript">shell.exec(
    `node node_modules/argus-eyes/bin/argus-eyes.js capture ${featureBranch}`
);

// Successful output will say something like &quot;12 screenshots saved to .argus-eyes/feature-branch-name&quot;
</code></pre>
<p>Now for the tricky part: we have to switch to whatever the base branch is (develop in this case). This is the biggest hurdle because, although the function is simple, it may fail if there are any uncommitted changes, so it&#39;s probably best to warn the user to commit or stash everything first.</p>
<pre><code class="language-javascript">/**
 * @param {Repository} repo - The reference to the repository object
 * @returns {Promise}
 */
function switchToDevelop(repo) {
    return repo.checkoutBranch(&#39;develop&#39;);
}

switchToDevelop(repo).then(...)
</code></pre>
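<p>As a guard, nodegit can list uncommitted changes via <code>Repository#getStatus</code>, so a minimal sketch of a pre-checkout check (my own helper, not part of argus-eyes) might look like this:</p>
<pre><code class="language-javascript">/**
 * Rejects if the working directory has uncommitted changes,
 * since checkoutBranch may fail on a dirty tree.
 * @param {Repository} repo - The reference to the repository object
 * @returns {Promise}
 */
function assertCleanWorkingDirectory(repo) {
    return repo.getStatus().then(statuses =&gt; {
        if (statuses.length &gt; 0) {
            throw new Error(&#39;Commit or stash your changes before running the visual regression tests&#39;);
        }
        return repo;
    });
}

openRepository(repoPath)
    .then(assertCleanWorkingDirectory)
    .then(switchToDevelop);
</code></pre>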
<p>After successfully changing to develop, we still have to capture the branch and then compare them, which is done like so:</p>
<pre><code class="language-javascript">shell.exec(&#39;node node_modules/argus-eyes/bin/argus-eyes.js capture develop&#39;);

shell.exec(
    &#39;node node_modules/argus-eyes/bin/argus-eyes.js compare develop &#39; +
        featureBranch
);
</code></pre>
<p>If Argus detects any screenshots over the change threshold, it will save the diff in a folder like <code>.argus-eyes/diff_develop_feature_branch_name</code>. For the full file in action, check out this gist: <a href="https://gist.github.com/3stacks/0976ef8a84c50c6096aea09dbbbebd88">https://gist.github.com/3stacks/0976ef8a84c50c6096aea09dbbbebd88</a></p>
<h2>Retrospective</h2>
<p>To improve this process, it might be an idea to save the baseline screenshots in the repo and overwrite them whenever you push to that branch. This would eliminate the need to switch between branches.</p>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[Local Storage Manager version 2.1 is out now]]></title>
      <link>https://lukeboyle.com/blog/local-storage-manager-version-2-1-is-out-now/</link>
      <guid>https://lukeboyle.com/blog/local-storage-manager-version-2-1-is-out-now/</guid>
      <pubDate>Wed, 19 Oct 2016 00:00:00 GMT</pubDate>
      <description><![CDATA[Local Storage Manager version 2.1 is out now]]></description>
      <content:encoded><![CDATA[<p>The latest version of local-storage-manager has had its internal interface greatly improved for tidiness and best practice, and now includes the new namespace feature. Traditionally, you would have to store your data like so:</p>
<pre><code>const appState = {
    key1: {...},
    key2: {...}
}
</code></pre>
<p>and set the data like this:</p>
<pre><code>localStorageManager.set(&#39;appData&#39;, appState);
</code></pre>
<p>The issue with this is that you may not want <code>key1</code> and <code>key2</code> grouped together under one key, but you also don&#39;t want them tossed straight into top-level local storage. With namespaces you can do this:</p>
<pre><code>localStorageManager.set(&#39;key1&#39;, key1, &#39;myAppState&#39;);
localStorageManager.set(&#39;key2&#39;, key2, &#39;myAppState&#39;);
</code></pre>
<p>This makes it easier to access all of your data at once while still keeping those keys theoretically separate. When accessing the namespaced data, you simply add the namespace as the second arg like so:</p>
<pre><code>localStorageManager.get(&#39;key1&#39;, &#39;myAppState&#39;);
</code></pre>
<p>The app is now more robust internally and handles missing data better. It also uses the <code>getItem</code> and <code>setItem</code> methods internally instead of accessing localStorage directly. To get started, install via npm with <code>npm install @lukeboyle/local-storage-manager</code>. See the npm page for documentation and in-depth instructions at <a href="https://www.npmjs.com/package/@lukeboyle/local-storage-manager">https://www.npmjs.com/package/@lukeboyle/local-storage-manager</a></p>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[Running Karma tests for Chrome in Travis CI]]></title>
      <link>https://lukeboyle.com/blog/running-karma-tests-for-chrome-in-travis-ci/</link>
      <guid>https://lukeboyle.com/blog/running-karma-tests-for-chrome-in-travis-ci/</guid>
      <pubDate>Thu, 13 Oct 2016 00:00:00 GMT</pubDate>
      <description><![CDATA[Running Karma tests for Chrome in Travis CI]]></description>
      <content:encoded><![CDATA[<p>A quick-start guide for running Karma tests for Chrome in Travis CI. When you run Travis on a Node.js project, it will - by default - run <code>npm install</code> and then <code>npm test</code>. I first ran into the issue in an Angular project that had tests triggered by the <code>prepublish</code> command. My CI build failed, and I removed the prepublish hook and renamed my test script until I had time to come back to it. For months I avoided the issue, but I have finally solved it. The Karma docs suggest running the tests in Firefox with the <code>--browsers</code> flag (see <a href="https://karma-runner.github.io/0.8/plus/Travis-CI.html">https://karma-runner.github.io/0.8/plus/Travis-CI.html</a>), but Travis has since been updated so that Chrome can be loaded into the environment. For this to work, you&#39;ll need to make changes to your <code>.travis.yml</code> file and your karma config file.</p>
<p><code>.travis.yml</code></p>
<p>Note that I&#39;m only using the latest Node, as that&#39;s all I require:</p>
<pre><code>language: node_js
node_js:
  - &quot;node&quot;
before_script:
  - export CHROME_BIN=chromium-browser
  - export DISPLAY=:99.0
  - sh -e /etc/init.d/xvfb start
</code></pre>
<p>The <code>before_script</code> is the special part: it points Travis in the right direction for running Chrome. The last two lines are addressed in the karma docs linked above. Personally, I am using a separate karma config file, and I want to make the changes within that config file to keep my test script clean. My test script is:</p>
<p><code>&quot;test&quot;: &quot;karma start karma.config.js&quot;</code></p>
<p><code>karma.config.js</code></p>
<pre><code class="language-javascript">const configuration = {
    files: [{ pattern: &#39;tests/**/**/**.*&#39;, watched: true }],
    customLaunchers: {
        chromeTravisCi: {
            base: &#39;Chrome&#39;,
            flags: [&#39;--no-sandbox&#39;]
        }
    },
    frameworks: [&#39;mocha&#39;],
    browsers: [&#39;Chrome&#39;],
    failOnEmptyTestSuite: true,
    singleRun: true
};

if (process.env.TRAVIS) {
    configuration.browsers = [&#39;chromeTravisCi&#39;];
}

module.exports = function(config) {
    config.set(configuration);
};
</code></pre>
<p>Luckily, Travis sets the <code>TRAVIS</code> environment variable in its build environment, and if we detect it, we swap the configuration&#39;s browsers to <code>[&#39;chromeTravisCi&#39;]</code>, the launcher defined in <code>customLaunchers</code>. Add whatever pre-processors you need to the configuration object and it should work fine when you deploy.</p>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[JSX in Vue.JS]]></title>
      <link>https://lukeboyle.com/blog/jsx-in-vue-js/</link>
      <guid>https://lukeboyle.com/blog/jsx-in-vue-js/</guid>
      <pubDate>Sun, 25 Sep 2016 00:00:00 GMT</pubDate>
      <description><![CDATA[JSX in Vue.JS]]></description>
      <content:encoded><![CDATA[<p>I&#39;ve recently been experimenting with using JSX in Vue via the Vue JSX plugin for Babel, instead of the standard template pattern. Since there aren&#39;t really any official docs for the plugin, I&#39;m going to run through a quick usage guide.</p>
<h3>Getting Started</h3>
<p>For my project I&#39;m using Webpack and plain npm scripts. Whatever your choice of build process, the important part is how your Babel config or <code>.babelrc</code> is set up:</p>
<pre><code>plugins: [
    &#39;transform-runtime&#39;,
    &#39;transform-vue-jsx&#39;
],
presets: [&#39;es2015&#39;]
</code></pre>
<p>That&#39;s the basic requirement for getting started. To install those, run:</p>
<ul>
<li><code>npm install -D babel-plugin-transform-runtime</code></li>
<li><code>npm install -D babel-plugin-transform-vue-jsx babel-helper-vue-jsx-merge-props babel-plugin-syntax-jsx</code></li>
<li><code>npm install -D babel-preset-es2015</code></li>
</ul>
<p>The official repo for the Vue JSX plugin is located here: <a href="https://github.com/vuejs/babel-plugin-transform-vue-jsx">https://github.com/vuejs/babel-plugin-transform-vue-jsx</a> The interesting part about Vue JSX, in my opinion, is that it follows the Angular pattern for registering components. Whereas in React you just import a function that returns JSX and can name it whatever you like, in Vue JSX you must declare the name and register the component globally. Vue has a <code>component</code> method that takes a name and an object with all relevant data. The difference is that instead of a <code>template</code> entry, there&#39;s a <code>render</code> function that returns JSX.</p>
<pre><code>Vue.component(&#39;jsx-example&#39;, {
  render (h) { // &lt;-- h must be in scope
    return &lt;div id=&quot;foo&quot;&gt;bar&lt;/div&gt;
  }
})

// Usage

&lt;div&gt;
    &lt;jsx-example/&gt;
&lt;/div&gt;
</code></pre>
<p><code>h</code> is the shorthand for the Vue instance $createElement method so you have to make sure that h is in the scope of your components, like so:</p>
<pre><code>const pageView = new Vue({
    el: &#39;#root&#39;,
    data: {},
    methods: {},
    render () {
        const h = this.$createElement;
        return (
            &lt;div&gt;
                &lt;jsx-example/&gt;
            &lt;/div&gt;
        )
    }
});
</code></pre>
<p>From the get-go, it seems to me like we&#39;ve lost some of the versatility JSX provides by having to integrate it into the normal Vue component pattern. Event listeners have their own convention too:</p>
<pre><code>  return (
    &lt;div
      // event listeners are prefixed with on- or nativeOn-
      on-click={this.clickHandler}
      nativeOn-click={this.nativeClickHandler}
      key=&quot;key&quot;
      ref=&quot;ref&quot;&gt;
    &lt;/div&gt;
  )
</code></pre>
<h3>Considerations</h3>
<p>There&#39;s a strange quirk where <code>on-change</code> on a form input seems to be naturally debounced, and <code>nativeOn-change</code> doesn&#39;t seem to behave any differently. Refs also don&#39;t work like they do in React classes, where you refer to an element with <code>this.refs</code>; you need to use <code>this.$refs</code>, which follows the usual Vue convention. Since there&#39;s no documentation surrounding the JSX, I&#39;m assuming the rest of the behaviour follows the standard Vue component pattern, but with a <code>render</code> function instead of a template. The JSX doesn&#39;t support the normal Vue directives, so you&#39;ll have to do those things programmatically, as shown below.</p>
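<p>For example, the <code>v-for</code> and <code>v-if</code> equivalents are just plain JavaScript inside the render function. A quick sketch (the component and data names are made up):</p>
<pre><code>Vue.component(&#39;item-list&#39;, {
  props: [&#39;items&#39;, &#39;showHeading&#39;],
  render (h) {
    return (
      &lt;ul&gt;
        {/* v-if equivalent: a plain ternary */}
        {this.showHeading ? &lt;li class=&quot;heading&quot;&gt;My items&lt;/li&gt; : null}
        {/* v-for equivalent: Array.map */}
        {this.items.map(item =&gt; &lt;li key={item.id}&gt;{item.name}&lt;/li&gt;)}
      &lt;/ul&gt;
    )
  }
})
</code></pre>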
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[React Material-UI touch events not firing]]></title>
      <link>https://lukeboyle.com/blog/react-material-ui-touch-events-not-firing/</link>
      <guid>https://lukeboyle.com/blog/react-material-ui-touch-events-not-firing/</guid>
      <pubDate>Sat, 24 Sep 2016 00:00:00 GMT</pubDate>
      <description><![CDATA[React Material-UI touch events not firing]]></description>
      <content:encoded><![CDATA[<p><strong>This article is probably no longer relevant</strong></p>
<p>After much frustration with this issue, I found a section in the React Material-UI documentation: React-Tap-Event-Plugin. The custom components like the select field don&#39;t work well with the traditional onClick listener, so as a workaround, react-tap-event-plugin must be included in your React project. The dependency is supposedly only temporary. See the repo here: <a href="https://github.com/zilverline/react-tap-event-plugin">https://github.com/zilverline/react-tap-event-plugin</a></p>
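<p>For completeness, wiring the plugin up was a one-off call before rendering anything, roughly like this (from memory of the plugin&#39;s README, so treat it as a sketch):</p>
<pre><code>var injectTapEventPlugin = require(&#39;react-tap-event-plugin&#39;);

// Must run once, before any Material-UI components are rendered
injectTapEventPlugin();
</code></pre>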
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[Dynamic Product Filtering in Shopify]]></title>
      <link>https://lukeboyle.com/blog/dynamic-product-filtering-in-shopify/</link>
      <guid>https://lukeboyle.com/blog/dynamic-product-filtering-in-shopify/</guid>
      <pubDate>Thu, 11 Aug 2016 00:00:00 GMT</pubDate>
      <description><![CDATA[Dynamic Product Filtering in Shopify]]></description>
      <content:encoded><![CDATA[<blockquote>
<p>Disclaimer: Shopify is not good. I recommend steering clear and opting for one of many alternatives. It&#39;s an extremely closed platform that doesn&#39;t encourage innovation and naturally leans towards bad practice. Given this, if you still have to use it, read on.</p>
</blockquote>
<p>In Shopify, there is a native (albeit &#39;unsupported&#39;) filtering system. Native filtering is based on the tags you specify on your products. If you go to your collection, you can link the user to a tag, and Shopify will filter the products with some simple Javascript, like so: collections/collection-name/tag-one/tag-two. Now, given that in a collection you have access to collection.all_vendors and all_types, WHY OH WHY is there not native filtering based on those? Filtering could EASILY be dynamic if Shopify cared enough to implement it.</p>
<p>The &#39;official&#39; solution (as per the documentation: <a href="https://help.shopify.com/themes/customization/collections/filtering-a-collection-with-multiple-tag-drop-down">https://help.shopify.com/themes/customization/collections/filtering-a-collection-with-multiple-tag-drop-down</a>) is to make several drop-downs and hard-code the list of tags you want to allow filtering by (e.g. tags = &quot;red&quot;, &quot;blue&quot;, &quot;green&quot;). So next week when I add a yellow shirt, I have to go back into the pits and add another tag? Not happening. This is how I make filters dynamic.</p>
<p>After searching for hours, I can conclusively say that there is no open source solution for this, and given the constraints of the garbage Liquid templating engine, I can confidently say that this is the least convoluted solution available. All it takes is implementing a rigid structure in your tagging system, so this is much easier on a new store. The tag structure is basically category:tagName. Let&#39;s say you want to filter your products by brand: on your product page, in the tags section, enter brand:brandName. The same goes for <code>size:1</code> or <code>color:blue</code>. It&#39;s up to you how many you use, because I guarantee your collection sorting template is going to be a BIG file. The best part about all this is that there&#39;s no array filter or equivalent method in Liquid, so we&#39;re going to have to do some crazy shit.</p>
<pre><code>{% for tag in collection.all_tags %} &lt;-- Start iterating over all tags
  {% if tag contains &#39;style&#39; %} &lt;-- Check if it contains your keyword
    {% capture raw_style_tags %} &lt;-- Extend the variable `raw_style_tags`
      {{ raw_style_tags | append: tag | append: &#39;, &#39; }} &lt;-- Build a string of tags separated by commas
    {% endcapture %}
    {% assign style_tags = raw_style_tags | split: &#39;, &#39; %} &lt;-- Split the string on the commas to build a new array
  {% endif %}
{% endfor %}
</code></pre>
<p>The variable <code>style_tags</code> is now an array of every tag containing &#39;style:&#39;. Next, make a select field whose options are all of your style tags. Note that current_tags returns a list of the tags you are currently filtering by.</p>
<pre><code>&lt;label&gt;Shop by style&lt;/label&gt;
&lt;select class=&quot;coll-filter&quot;&gt;
  &lt;option value=&quot;&quot;&gt;All&lt;/option&gt;
  {% for t in style_tags %}
    {% assign tag = t | strip %}
    {% if current_tags contains tag %} &lt;-- check if the tag is currently active - applies selected attribute
      &lt;option value=&quot;{{ tag }}&quot; selected&gt;{{ tag | remove: &#39;style:&#39; }}&lt;/option&gt;
    {% elsif product_tags contains tag %} &lt;-- else, just make it an option
      &lt;option value=&quot;{{ tag }}&quot;&gt;{{ tag | remove: &#39;style:&#39; }}&lt;/option&gt; &lt;-- use the remove filter to have just the tag name
    {% endif %}
  {% endfor %}
&lt;/select&gt;
</code></pre>
<p>If you include the Javascript from the Shopify docs, it will automatically listen for changes to that <code>.coll-filter</code> select (a rough sketch of the idea follows below). This way, if you ever add more tags under the <code>style:</code> category, you won&#39;t have to update your view. And the best part is, you can add a new category on your product page, copy-paste those lines of code, and change &#39;style&#39; to whatever the new category is called. I must reiterate: you should only use Shopify if you have no other choice. Cheers!</p>
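<p>If you don&#39;t want to pull in the docs&#39; script wholesale, the behaviour boils down to a redirect on change. A rough sketch (not the verbatim Shopify script; <code>collectionHandle</code> is a made-up variable you&#39;d inject from Liquid, e.g. <code>{{ collection.handle }}</code>):</p>
<pre><code>var collectionHandle = &#39;collection-name&#39;;

document.querySelector(&#39;.coll-filter&#39;).addEventListener(&#39;change&#39;, function (event) {
  var tag = event.target.value;

  // An empty value means &quot;All&quot;, so drop back to the unfiltered collection
  window.location.href = tag
    ? &#39;/collections/&#39; + collectionHandle + &#39;/&#39; + tag
    : &#39;/collections/&#39; + collectionHandle;
});
</code></pre>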
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[Publishing React components to npm]]></title>
      <link>https://lukeboyle.com/blog/publishing-react-components-to-npm/</link>
      <guid>https://lukeboyle.com/blog/publishing-react-components-to-npm/</guid>
      <pubDate>Thu, 11 Aug 2016 00:00:00 GMT</pubDate>
      <description><![CDATA[Publishing React components to npm]]></description>
      <content:encoded><![CDATA[<p>Having built and published a few React components to npm, and in keeping with the plug-n-play spirit of npm, I have what I believe to be a very simple setup for both the development and installation of components. I published a boilerplate project to Git/npm and it is now my go-to whenever I need to put together an external component: <a href="https://www.npmjs.com/package/@lukeboyle/react-component-boilerplate">https://www.npmjs.com/package/@lukeboyle/react-component-boilerplate</a></p>
<p>The basic concept is that you have an index.jsx in a &#39;src&#39; folder. This is transpiled to ES5 and output to the root directory as &#39;index.js&#39;, which is the &quot;main&quot; in your package.json. You may notice the &quot;jsnext:main&quot; entry in the package, which points to the jsx file. This convention was established by rollup (<a href="https://github.com/rollup/rollup/wiki/jsnext:main">https://github.com/rollup/rollup/wiki/jsnext:main</a>) as an entry point for ES6 modules: when you bundle using Rollup (and the ES6 import/export syntax), your ES6 module is used instead of the ES5 one. Given that we&#39;re still largely in the ES5 age, the rollup config generates an ES5 version (which is the main entry point) and keeps the ES6 version in src, so you can feel free to write all the JSX goodness you please. The folder structure should roughly look like this:</p>
<p><code>project-root</code></p>
<pre><code>|--src
|  |--index.jsx
|--index.js
|--rollup.config.js (OR)
|--webpack.config.js
|--demo
|  |--dist
|  |  |--build files
|  |--src
|  |  |--src files
</code></pre>
<p><code>index.jsx</code></p>
<pre><code>import * as React from &#39;react&#39;;

export default function ReactComponent(props) {
    return &lt;div&gt;Job&#39;s Done&lt;/div&gt;;
}
</code></pre>
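<p>To tie that back to <code>package.json</code>, the relevant entries end up looking something like this (a sketch; the build script assumes a <code>rollup.config.js</code> in the project root):</p>
<pre><code>{
    &quot;main&quot;: &quot;index.js&quot;,
    &quot;jsnext:main&quot;: &quot;src/index.jsx&quot;,
    &quot;scripts&quot;: {
        &quot;build&quot;: &quot;rollup -c&quot;
    }
}
</code></pre>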
<p>Also, to play your part in improving our package ecosystem, consider namespacing your package for npm: <a href="http://blog.npmjs.org/post/116936804365/solving-npms-hard-problem-naming-packages">http://blog.npmjs.org/post/116936804365/solving-npms-hard-problem-naming-packages</a></p>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[Agander 2.0 is now out]]></title>
      <link>https://lukeboyle.com/blog/agander-2-0-is-now-out/</link>
      <guid>https://lukeboyle.com/blog/agander-2-0-is-now-out/</guid>
      <pubDate>Tue, 07 Jun 2016 00:00:00 GMT</pubDate>
      <description><![CDATA[Agander 2.0 is now out]]></description>
      <content:encoded><![CDATA[<p>It&#39;s been about 2 and a half months since the first official full release of Agander went live, and it&#39;s out with the old in with the new.</p>
<h2>What&#39;s new?</h2>
<p>Outwardly, the changes are minimal. The most obvious change is that the add module dialogue is now a modal instead of a floating column element. Various styles have been optimised and reduced as much as possible so that button sizes in particular are more consistent across browsers.</p>
<h2>So why the new version?</h2>
<p>Around three quarters of the way through version 1, it became apparent that the app was outgrowing the constraints of the Vue system I had created, so the app has been rebuilt in React.js and Redux.</p>
<p><strong>The standard module model</strong> Under this model, every module has a content object and an event object. The content object handles calendar events, Asana workspaces and so on. Adhering to this model will allow for rapid development of new modules in future.</p>
<p><strong>Events</strong> The event system is simulated using the Redux middleware Thunk. The base dispatch sets the event to executing, and it continues to execute until it is told to stop. If error is true, the event stops executing and the error response is populated in the response key; if error is false, the event resolved correctly and the response is the delicious events or tasks. React also makes rendering the correct component a breeze: I know to hide all content if the user hasn&#39;t authorised or if an event is still executing. Error messages are nice and simple too. <a href="https://youtu.be/T43RzjxwBys">https://youtu.be/T43RzjxwBys</a></p>
<p><strong>Next Steps</strong> Agander is being temporarily put on hold to focus on other projects, but in its current state it is very much usable. Aside from bug fixes, there will be no new features for at least a couple of months while I work on other things. I&#39;m really happy with how far the app has come, and I can finally use it for my own agenda tracking.</p>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[Google Calendar API - ColorId]]></title>
      <link>https://lukeboyle.com/blog/google-calendar-api-color-id/</link>
      <guid>https://lukeboyle.com/blog/google-calendar-api-color-id/</guid>
      <pubDate>Wed, 20 Apr 2016 00:00:00 GMT</pubDate>
      <description><![CDATA[Google Calendar API - ColorId]]></description>
      <content:encoded><![CDATA[<p>When you request a Google Calendar event, it will come with a colorId, which is either undefined (if the user didn&#39;t select a colour) or between 1 and 11 (if they did). Since I needed these for Agander, I decided to collate them for the curious. These are the corresponding colours used in the Google Calendar app.</p>
<table>
<thead>
<tr><th>Color ID</th><th>Color Name</th><th>Hex Code</th></tr>
</thead>
<tbody>
<tr><td>undefined</td><td>Who knows</td><td>#039be5</td></tr>
<tr><td>1</td><td>Lavender</td><td>#7986cb</td></tr>
<tr><td>2</td><td>Sage</td><td>#33b679</td></tr>
<tr><td>3</td><td>Grape</td><td>#8e24aa</td></tr>
<tr><td>4</td><td>Flamingo</td><td>#e67c73</td></tr>
<tr><td>5</td><td>Banana</td><td>#f6c026</td></tr>
<tr><td>6</td><td>Tangerine</td><td>#f5511d</td></tr>
<tr><td>7</td><td>Peacock</td><td>#039be5</td></tr>
<tr><td>8</td><td>Graphite</td><td>#616161</td></tr>
<tr><td>9</td><td>Blueberry</td><td>#3f51b5</td></tr>
<tr><td>10</td><td>Basil</td><td>#0b8043</td></tr>
<tr><td>11</td><td>Tomato</td><td>#d60000</td></tr>
</tbody>
</table>
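<p>And if you want these in code rather than a table, here is the same data as a plain lookup map (a sketch):</p>
<pre><code>var GOOGLE_CALENDAR_COLORS = {
    1: &#39;#7986cb&#39;,  // Lavender
    2: &#39;#33b679&#39;,  // Sage
    3: &#39;#8e24aa&#39;,  // Grape
    4: &#39;#e67c73&#39;,  // Flamingo
    5: &#39;#f6c026&#39;,  // Banana
    6: &#39;#f5511d&#39;,  // Tangerine
    7: &#39;#039be5&#39;,  // Peacock
    8: &#39;#616161&#39;,  // Graphite
    9: &#39;#3f51b5&#39;,  // Blueberry
    10: &#39;#0b8043&#39;, // Basil
    11: &#39;#d60000&#39;  // Tomato
};

// An undefined colorId falls back to the calendar default
function getEventColor(event) {
    return GOOGLE_CALENDAR_COLORS[event.colorId] || &#39;#039be5&#39;;
}
</code></pre>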
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[Agander 1.0 is now out]]></title>
      <link>https://lukeboyle.com/blog/agander-1-0-is-now-out/</link>
      <guid>https://lukeboyle.com/blog/agander-1-0-is-now-out/</guid>
      <pubDate>Mon, 11 Apr 2016 00:00:00 GMT</pubDate>
      <description><![CDATA[Agander 1.0 is now out]]></description>
      <content:encoded><![CDATA[<p>Agander started in November 2015 with a vision to unify several of the productivity services I use. With Agander I could now have one tab where previously I had four or five. This post is fairly overdue, but I think it&#39;s worth taking the time to appreciate how far the project has come. While I did start in November, the biggest progress didn&#39;t come until January 2016. Working a 9-5 job and then coming home to work on Agander until 1AM has been a struggle, but the outcome is the true reward.</p>
<p><em>[Screenshot: Version 0.1 in December, with vaporware calendar]</em></p>
<p><em>[Screenshot: Version 1.0 on March 19th]</em></p>
<p>Agander has now entered a brief period of refactoring and optimisation, after which the next set of integrations will be developed to create a more comprehensive platform.</p>
]]></content:encoded>
    </item>
    <item>
      <title><![CDATA[Google Task Javascript API - Invalid Value 400 Error]]></title>
      <link>https://lukeboyle.com/blog/google-task-javascript-api-invalid-value-400-error/</link>
      <guid>https://lukeboyle.com/blog/google-task-javascript-api-invalid-value-400-error/</guid>
      <pubDate>Sat, 19 Mar 2016 00:00:00 GMT</pubDate>
      <description><![CDATA[Google Task Javascript API - Invalid Value 400 Error]]></description>
      <content:encoded><![CDATA[<p>After a long battle with the weird Google Task Javascript API I&#39;ve established a module for <a href="http://agander.io">Agander</a> that has the ability to:</p>
<ul>
<li>Authorise a user</li>
<li>Display all tasks in a given tasklist</li>
<li>Complete a task</li>
</ul>
<p>Authorising the user and displaying their tasks is reasonably easy following the quickstart guide <a href="https://developers.google.com/google-apps/tasks/quickstart/js#prerequisites">here.</a> Essentially, requests are separated into two categories: <code>tasks</code> and <code>tasklists</code>. Once you have loaded the Tasks API, you can see the basic structure and work from there. <a href="https://developers.google.com/google-apps/tasks/v1/reference/">API Reference for JS</a> To find the tasklists, you would use the list function (it returns an array of tasklist objects).</p>
<pre><code>function listTaskLists(gAPI) {
    var request = gAPI.client.tasks.tasklists.list({
        &#39;maxResults&#39;: 10
    });
    request.execute();
}
</code></pre>
<p>Finding tasks in a given task list works much the same way; however, you are dealing with Google here, so it&#39;s tasks.tasks.list... The basic parameter is just the tasklist you want to pull tasks from, though there are other options.</p>
<pre><code>function getTasksByListId(gAPI, tasklistId) {
    var request = gAPI.client.tasks.tasks.list({
        &#39;tasklist&#39;: tasklistId
    });
    request.execute();
}
</code></pre>
<p>So, we&#39;ve covered getting the tasks; how do we manipulate them? That&#39;s where the tricky part comes in. The <code>gapi</code> client interactions we used before have an <code>update</code> method. However, whenever I called update on anything, I got a 400 error with &#39;Invalid Value&#39;. This is a common issue online with no real solutions. The gist of it is that there is a bunch of &#39;required parameters&#39; to include in the request, but there is absolutely no documentation on them (thanks Google). To get around this, I found it was simply easier to make the request outright using the request method and giving it a URL. The path parameter requires a tasklist id and a task id; this is basically the URL that comes down with the getTasksByListId request. Make sure you define the method as PUT, and pass the whole task object with your updated values to Google. In this instance, we are marking the task as &#39;completed&#39; and giving it a completed timestamp.</p>
<pre><code>function markTaskComplete(gAPI, tasklistId, task) {
    // tasklistId comes in alongside the task, since the REST path needs both ids
    gAPI.client.request({
        path: &#39;https://www.googleapis.com/tasks/v1/lists/&#39; + tasklistId + &#39;/tasks/&#39; + task.id,
        method: &#39;PUT&#39;,
        body: Object.assign(
            {},
            task.originalTask,
            {
                completed: new Date().toISOString(),
                status: &#39;completed&#39;
            }
        )
    }).execute();
}
</code></pre>
<p>Now that you have a basis, the world is your oyster.</p>
]]></content:encoded>
    </item>
  </channel>
</rss>
