<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Overflow - Buffer Resources]]></title><description><![CDATA[In-depth ideas and guides to social media & online marketing strategy, published by the team at Buffer]]></description><link>https://buffer.com/resources/</link><image><url>https://buffer.com/resources/favicon.png</url><title>Overflow - Buffer Resources</title><link>https://buffer.com/resources/</link></image><generator>Ghost 6.22</generator><lastBuildDate>Fri, 13 Mar 2026 22:34:12 GMT</lastBuildDate><atom:link href="https://buffer.com/resources/overflow/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[What We Learned After Finding 7 Forgotten Jobs Running for 5 Years]]></title><description><![CDATA[We found 7 forgotten cron jobs that had been running for five years – here’s how we fixed them and what we learned in the process.]]></description><link>https://buffer.com/resources/infrastructure-refactoring/</link><guid isPermaLink="false">69aec4d03765540001fa80b1</guid><category><![CDATA[Open]]></category><category><![CDATA[Overflow]]></category><dc:creator><![CDATA[Carlos Muñoz]]></dc:creator><pubDate>Fri, 13 Mar 2026 11:00:56 GMT</pubDate><media:content url="https://buffer.com/resources/content/images/2026/03/Infra-Refactor-Blog-Image.png" medium="image"/><content:encoded><![CDATA[<img src="https://buffer.com/resources/content/images/2026/03/Infra-Refactor-Blog-Image.png" alt="What We Learned After Finding 7 Forgotten Jobs Running for 5 Years"><p>We recently started a small project to clean up how parts of our systems communicate behind the scenes at Buffer.</p><p>Some quick context: we use something called SQS (Amazon Simple Queue Service). These queues act like waiting rooms for tasks. 
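</p><p>The pattern can be sketched with Python&apos;s standard-library queue standing in for SQS (the message shape here is made up for illustration):</p>

```python
import queue

# A stand-in for an SQS queue: one side enqueues work,
# another picks it up later. Neither side waits on the other.
task_queue = queue.Queue()

# Producer: "leave the note" and move on.
task_queue.put({"job": "process_data", "payload": {"user_id": 42}})

# Consumer: runs later, often in a separate worker process.
message = task_queue.get()
print(message["job"])  # -> process_data
```

<p>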
One part of our system drops off a message, and another picks it up later. Think of it like leaving a note for a coworker: &quot;Hey, when you get a chance, process this data.&quot; The system that sends the note doesn&apos;t have to wait around for a response.</p><p>Our project was to perform routine maintenance: update the tools we use to test queues locally and clean up their configuration.</p><p>But while we were mapping out what queues we actually use, we found something we didn&apos;t expect: seven different background processes (or cron jobs, which are scheduled tasks that run automatically) and workers that had been running silently for up to five years. All of them doing absolutely nothing useful.</p><p>Here&apos;s why that matters, how we found them, and what we did about it.</p><h2 id="why-this-matters-more-than-youd-think"><strong>Why this matters more than you&apos;d think</strong></h2><p>Yes, running unnecessary infrastructure costs money. I did a quick calculation and for one of those workers, we would have paid ~$360-600 over 5 years. This is a modest amount in the grand scheme of our finances, but definitely pure waste for a process that does nothing.</p><p>However, after going through this cleanup, I&apos;d argue the financial cost is actually the smallest part of the problem.</p><p>Every time a new engineer joins the team and explores our systems, they encounter these mysterious processes. &quot;What does this worker do?&quot; becomes a question that eats up onboarding time and creates uncertainty. We&apos;ve all been there &#x2014; staring at a piece of code, afraid to touch it because <em>maybe</em> it&apos;s doing something important.</p><p>Even &quot;forgotten&quot; infrastructure occasionally needs attention. Security updates, dependency bumps, compatibility fixes when something else changes. This led to our team spending maintenance cycles on code paths that served no purpose.</p><p>And over time, the institutional knowledge fades. 
Was this critical? Was it a temporary fix that became permanent? The person who created it left the company years ago, and the context left with them.</p><h2 id="how-does-this-even-happen"><strong>How does this even happen?</strong></h2><p>It&apos;s easy to point fingers, but the truth is this happens naturally in any long-lived system.</p><p>A feature gets deprecated, but the background job that supported it keeps running. Someone spins up a worker &quot;temporarily&quot; to handle a migration, and it never gets torn down. A scheduled task becomes redundant after an architectural change, but nobody thinks to check.</p><p>We used to send birthday celebration emails at Buffer. To do this, we ran a scheduled task that checked the entire database for birthdays matching the current date and sent customers a personalized email. During a refactor in 2020, we switched our transactional email tool but forgot to remove this worker&#x2014;it kept running for five more years.</p><p>None of these are failures of individuals &#x2014; they&apos;re failures of process. Without intentional cleanup built into how we work, entropy wins.</p><h2 id="how-our-architecture-helped-us-find-it"><strong>How our architecture helped us find it</strong></h2><p>Like many companies, Buffer embraced the microservices movement (a popular approach where companies split their code into many small, independent services) years ago.</p><p>We split our monolith into separate services, each with its own repository, deployment pipeline, and infrastructure. At the time, it made sense: each service could be deployed on its own, with clear boundaries between teams.</p><p>But over the years, we found the overhead of managing dozens of repositories outweighed the benefits for a team our size. So we consolidated into a multi-service single repository. 
The services still exist as logical boundaries, but they live together in one place.</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/2026/03/shared-repository.png" class="kg-image" alt="What We Learned After Finding 7 Forgotten Jobs Running for 5 Years" loading="lazy" width="2000" height="1091" srcset="https://buffer.com/resources/content/images/size/w600/2026/03/shared-repository.png 600w, https://buffer.com/resources/content/images/size/w1000/2026/03/shared-repository.png 1000w, https://buffer.com/resources/content/images/size/w1600/2026/03/shared-repository.png 1600w, https://buffer.com/resources/content/images/size/w2400/2026/03/shared-repository.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>This turned out to be what made discovery possible.</p><p>In the microservices world, each repository is its own island. A forgotten worker in one repo might never be noticed by engineers working in another. There&apos;s no single place to search for queue names, no unified view of what&apos;s running where.</p><p>With everything in one repository, we could finally see the full picture. We could trace every queue to its consumers and producers. We could spot queues with producers but no consumers. We could find workers referencing queues that no longer existed.</p><p>The consolidation wasn&apos;t designed to help us find zombie infrastructure &#x2014; but it made that discovery almost inevitable.</p><h2 id="what-we-actually-did"><strong>What we actually did</strong></h2><p>Once we identified the orphaned processes, we had to decide what to do with them. Here&apos;s how we approached it.</p><p>First, we traced each one to its origin. We dug through git history and old documentation to understand why each worker was created in the first place. 
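</p><p>Mechanically, the queue audit described earlier (tracing every queue to its producers and consumers) reduces to a set comparison; a toy sketch with hypothetical queue names:</p>

```python
# Queue names referenced by code that sends vs. code that receives.
# These names are hypothetical, for illustration only.
produced_to = {"birthday-emails", "image-resize", "analytics-rollup"}
consumed_from = {"image-resize", "analytics-rollup", "legacy-migration"}

# Written to but never read: messages pile up with no reader.
orphaned_queues = produced_to - consumed_from

# Read from but never written to: a worker waiting on nothing.
idle_workers = consumed_from - produced_to

print(sorted(orphaned_queues))  # -> ['birthday-emails']
print(sorted(idle_workers))     # -> ['legacy-migration']
```

<p>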
In most cases, the original purpose was clear: a one-time data migration, a feature that got sunset, a temporary workaround that outlived its usefulness.</p><p>Then we confirmed they were truly unused. Before removing anything, we added logging to verify these processes weren&apos;t quietly doing something important we&apos;d missed. We monitored for a few days to make sure they were not called at all, and we removed them incrementally. We didn&apos;t delete everything at once. We removed processes one by one, watching for any unexpected side effects. (Luckily, there weren&apos;t any.)</p><p>Finally, we documented what we learned. We added notes to our internal docs about what each process had originally done and why it was removed, so future engineers wouldn&apos;t wonder if something important went missing.</p><h2 id="what-changed-after-clean-up"><strong>What changed after clean up</strong></h2><p>We&apos;re still early in measuring the full impact, but here&apos;s what we&apos;ve seen so far.</p><p>Our infrastructure inventory is now accurate. When someone asks, &quot;What workers do we run?&quot; we can actually answer that question with confidence.</p><p>Onboarding conversations have gotten simpler, too. New engineers aren&apos;t stumbling across mysterious processes and wondering if they&apos;re missing context. The codebase reflects what we actually do, not what we did five years ago.</p><h2 id="treat-refactors-as-archaeology-and-prevention"><strong>Treat refactors as archaeology and prevention</strong></h2><p>My biggest takeaway from this project: every significant refactor is an opportunity for archaeology.</p><p>When you&apos;re deep in a system, really understanding how the pieces connect, you&apos;re in the perfect position to question what&apos;s still needed. That queue from some old project? The worker someone created for a one-time data migration? The scheduled task that references a feature you&apos;ve never heard of? 
They might still be running.</p><p>Here&apos;s what we&apos;re building into our process going forward:</p><ul><li><strong>During any refactor</strong>, ask: what else touches this system that we haven&apos;t looked at in a while?</li><li><strong>When deprecating a feature</strong>, trace it all the way to its background processes, not just the user-facing code.</li><li><strong>When someone leaves the team</strong>, document what they were in charge of, especially the stuff that runs in the background.</li></ul><p>We still have older parts of our codebase that haven&apos;t been migrated to the single repository yet. As we continue consolidating, we&apos;re confident we&apos;ll find more of these hidden relics. But now we&apos;re set up to catch them and prevent new ones from forming.</p><p>When all your code lives in one place, orphaned infrastructure has nowhere to hide.</p>]]></content:encoded></item><item><title><![CDATA[Popcorn To Go: Our New Mobile Design System for iOS and Android]]></title><description><![CDATA[In this article, Daniel, a Senior Product Designer, breaks down the development and release of Buffer's new mobile design system, Popcorn to Go.]]></description><link>https://buffer.com/resources/popcorn-to-go/</link><guid isPermaLink="false">6936c7f7c8972100019263b4</guid><category><![CDATA[Open]]></category><category><![CDATA[Overflow]]></category><dc:creator><![CDATA[Daniel Parascandolo]]></dc:creator><pubDate>Fri, 12 Dec 2025 11:00:36 GMT</pubDate><media:content url="https://buffer.com/resources/content/images/2025/12/Popcorn-to-Go.png" medium="image"/><content:encoded><![CDATA[<img src="https://buffer.com/resources/content/images/2025/12/Popcorn-to-Go.png" alt="Popcorn To Go: Our New Mobile Design System for iOS and Android"><p>Delivering consistent mobile experiences is <em>hard</em>.</p><p>Between iOS and Android&apos;s distinct design languages, different versions of native components, and Buffer&apos;s own design language, mobile apps can 
sometimes feel fragmented. Designers and developers end up speaking different languages, duplicating work, and shipping experiences that feel inconsistent across platforms.</p><p>At Buffer, we really felt this friction. Our mobile design workflow wasn&apos;t as efficient as it could have been. We spent too much time reinventing the wheel, manually patching together screenshots, and playing catch-up with our web app counterpart. We knew we needed a better way.</p><p>So we built one.</p><h2 id="meet-%F0%9F%8D%BF-popcorn-to-go">Meet <strong>&#x1F37F; Popcorn To Go</strong></h2><h2 id></h2><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/2025/12/Popcorn-to-Go-Image.png" class="kg-image" alt="Popcorn To Go: Our New Mobile Design System for iOS and Android" loading="lazy" width="2000" height="1214" srcset="https://buffer.com/resources/content/images/size/w600/2025/12/Popcorn-to-Go-Image.png 600w, https://buffer.com/resources/content/images/size/w1000/2025/12/Popcorn-to-Go-Image.png 1000w, https://buffer.com/resources/content/images/size/w1600/2025/12/Popcorn-to-Go-Image.png 1600w, https://buffer.com/resources/content/images/size/w2400/2025/12/Popcorn-to-Go-Image.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>Buffer&apos;s new mobile design system for iOS and Android. It&apos;s our answer to the chaos, and it just passed its first major test: helping us ship our iOS app with Apple&apos;s new Liquid Glass design language the moment iOS 26 launched back in September 2025.</p><p>Let&apos;s dig in. &#x1F37F;</p><h3 id="why-we-built-it">Why we built it</h3><p>Before Popcorn To Go, our mobile development process had some painful friction points:</p><ul><li><strong>Miscommunication between design and engineering.</strong> Without a shared design language, handoffs were slow and error-prone. Our iOS app ended up with 300+ colors, most of which were slightly different shades of the same color. 
No source of truth existed.</li><li><strong>Design decisions made on the fly.</strong> With no source of truth, engineers were left to improvise and take on-the-fly design decisions to make things work.</li><li><strong>Inconsistent and inaccessible UI.</strong> Minor differences crept in between platforms, and even between different screens on the same platform. Our apps didn&apos;t feel as polished as they could be, and we weren&apos;t fully using the accessibility features built into native components.</li><li><strong>Dated look and feel.</strong> With all these things piling up, it became harder to adopt the latest native components or implement changes to Buffer&apos;s general look and feel.</li></ul><p>These problems started to hold us back. Our vision for Popcorn To Go was simple: create a system that delivers efficiency, consistency, accessibility, and future-proofing, without sacrificing the unique character and advantages that native components bring to a small mobile team like ours.</p><h3 id="the-goals-of-popcorn-to-go">The goals of Popcorn To Go</h3><p>We set out with four clear goals:</p><ol><li><strong>Efficiency for engineering and design teams</strong> through standardized components and smart use of native platform components.</li><li><strong>Unified design language</strong> that reduces miscommunication and speeds up iteration.</li><li><strong>Accessibility baked in</strong> by inheriting best practices from iOS and Android&apos;s native components.</li><li><strong>Readiness for platform evolution</strong>, like iOS 26&apos;s Liquid Glass, so we can move fast when the platforms do.</li></ol><h3 id="how-it-works">How it works</h3><p>At its core, Popcorn To Go is built on two key concepts: <strong>tokens</strong> and <strong>component kits</strong>.</p><p><strong>Tokens</strong> are the design decisions that define your visual language &#x2014; things like colors, spacing, typography, and border radii. Think of them as the ingredients in a recipe. 
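</p><p>In code terms, a token is just a named lookup that resolves to a different primitive value per theme. A toy sketch (the <code>fill-brand</code> name and light-mode green come from this article; the dark-mode value is invented, not Buffer&apos;s real palette):</p>

```python
# Semantic tokens map to primitive values per theme.
# "#6FA85F" is an invented dark-mode shade for illustration.
TOKENS = {
    "fill-brand": {"light": "#8FC67D", "dark": "#6FA85F"},
}

def resolve(token: str, theme: str) -> str:
    """Return the concrete color for a semantic token in a given theme."""
    return TOKENS[token][theme]

print(resolve("fill-brand", "light"))  # -> #8FC67D
print(resolve("fill-brand", "dark"))   # -> #6FA85F
```

<p>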
Instead of hardcoding &quot;use brand green #8FC67D,&quot; we define a token like <code>fill-brand</code> that automatically adapts across light mode, dark mode, and different platforms. This means less chance of the wrong color being applied at any point.</p><p><strong>Component kits</strong> are pre-built UI building blocks (buttons, cards, navigation bars) that use those tokens. They live in Figma for designers and are implemented in code for engineers, creating a shared source of truth.</p><p>The tricky part? Balancing <strong>platform specificity</strong> with <strong>cross-platform consistency</strong>.</p><p>iOS and Android have their own design languages: Apple&apos;s <a href="https://www.google.com/search?q=Apple&apos;s+Human+Interface+Guidelines&amp;sourceid=chrome&amp;ie=UTF-8">Human Interface Guidelines</a> and Google&apos;s <a href="https://m3.material.io/">Material Design</a>. We didn&apos;t want to flatten everything into a generic &quot;lowest common denominator&quot; experience. 
Instead, Popcorn To Go respects each platform&apos;s native patterns while maintaining a cohesive Buffer feel.</p><p>This approach comes with a bonus: we get to use ready-made components that are stress-tested by the native platforms for accessibility and cross-device compatibility &#x2014; a huge asset for a two-person mobile engineering team.</p><p>Here&apos;s how we structured it in Figma:</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/2025/12/token-relationships-in-buffer-mobile-design-system.png" class="kg-image" alt="Popcorn To Go: Our New Mobile Design System for iOS and Android" loading="lazy" width="2000" height="1182" srcset="https://buffer.com/resources/content/images/size/w600/2025/12/token-relationships-in-buffer-mobile-design-system.png 600w, https://buffer.com/resources/content/images/size/w1000/2025/12/token-relationships-in-buffer-mobile-design-system.png 1000w, https://buffer.com/resources/content/images/size/w1600/2025/12/token-relationships-in-buffer-mobile-design-system.png 1600w, https://buffer.com/resources/content/images/size/w2400/2025/12/token-relationships-in-buffer-mobile-design-system.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>Token relationships between Figma files across the Web and Mobile design systems</p><ul><li><strong>Mobile/Styles</strong>: Our foundation layer with primitive colors and platform-specific tokens. We used Material 3 naming for Android and custom naming for Apple. 
The primitive colours mirror those in our web app.</li><li><strong>Mobile/Android M3</strong>: Components built with Google&apos;s Material 3 Expressive language, fully linked to our Android tokens.</li><li><strong>Mobile/iOS &amp; iPadOS 26</strong>: Native iOS 26 components using Apple&apos;s Liquid Glass design language linked to our Apple tokens.</li><li><strong>Mobile/iOS &amp; iPadOS 18</strong>: A lighter-touch kit for the previous iOS version (since we support one version back).</li><li><strong>Mobile/Custom Components</strong>: Buffer-specific components that don&apos;t exist natively on either platform.</li></ul><h3 id="design-operations-challenges-we-solved">Design operations challenges we solved</h3><p>Getting this system working smoothly meant tackling some gnarly design operations challenges:</p><ul><li><strong>Figma linking</strong>: The biggest challenge we faced was linking primitives. In an ideal world, the primitive colors would come directly from our main design system, Popcorn, and Popcorn To Go would simply map these to Android or Apple-specific tokens. However, Figma&apos;s current feature set doesn&apos;t support this. 
We had to create a new primitives file for Popcorn To Go that manually mirrors the web&apos;s primitives.</li></ul><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://buffer.com/resources/content/images/2025/12/Popcorn-to-Go-from-Daniel--4-.png" class="kg-image" alt="Popcorn To Go: Our New Mobile Design System for iOS and Android" loading="lazy" width="2000" height="1293" srcset="https://buffer.com/resources/content/images/size/w600/2025/12/Popcorn-to-Go-from-Daniel--4-.png 600w, https://buffer.com/resources/content/images/size/w1000/2025/12/Popcorn-to-Go-from-Daniel--4-.png 1000w, https://buffer.com/resources/content/images/size/w1600/2025/12/Popcorn-to-Go-from-Daniel--4-.png 1600w, https://buffer.com/resources/content/images/size/w2400/2025/12/Popcorn-to-Go-from-Daniel--4-.png 2400w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Mirroring of primitive Web tokens to Mobile tokens balance consistency with flexibility</span></figcaption></figure><ul><li><strong>Token naming</strong>: Creating a naming system across web, iOS, and Android that is somewhat streamlined whilst respecting platform-specific conventions.</li></ul><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://buffer.com/resources/content/images/2025/12/Popcorn-to-Go-from-Daniel--3-.png" class="kg-image" alt="Popcorn To Go: Our New Mobile Design System for iOS and Android" loading="lazy" width="2000" height="1296" srcset="https://buffer.com/resources/content/images/size/w600/2025/12/Popcorn-to-Go-from-Daniel--3-.png 600w, https://buffer.com/resources/content/images/size/w1000/2025/12/Popcorn-to-Go-from-Daniel--3-.png 1000w, https://buffer.com/resources/content/images/size/w1600/2025/12/Popcorn-to-Go-from-Daniel--3-.png 1600w, https://buffer.com/resources/content/images/size/w2400/2025/12/Popcorn-to-Go-from-Daniel--3-.png 2400w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Naming is 
hard!</span></figcaption></figure><ul><li><strong>Kit styling</strong>: Applying our tokens to platform-specific kits while maintaining flexibility for future updates. This required using several handy plugins like Figma Tokens and Variables Importer.</li></ul><p>Honestly, it&apos;s not the perfect, smoothly connected &amp; humming system every designer dreams of when setting up a design system.</p><p>Apple&apos;s component kits, in particular, are complex and sometimes inconsistent, whilst Android&apos;s token naming is very specific and tricky in its own way. But we landed on pragmatic solutions that work for everyday use and achieve the goals we set out to achieve.</p><h3 id="strategic-timing-the-ios-26-test">Strategic timing: The iOS 26 test</h3><p>We launched Popcorn To Go with intentional timing. iOS 26 was on the horizon, bringing Apple&apos;s new Liquid Glass design language: a fresh, modern aesthetic with frosted glass effects, refined animations, and elevated visual polish.</p><p>By building Popcorn To Go <em>before</em> iOS 26 launched, we positioned ourselves to:</p><ul><li><strong>Be ready from day one</strong> when iOS 26 dropped</li><li><strong>Leverage the latest platform capabilities</strong> immediately</li><li><strong>Ship our app&apos;s visual refresh</strong> alongside Apple&apos;s new design language for maximum impact.</li></ul><p>And it worked. When iOS 26 launched in September, we were ready. 
Our updated iOS app shipped with both Liquid Glass <em>and</em> Buffer&apos;s refreshed brand aesthetic, delivering a polished, modern experience that feels native to the platform while staying distinctly Buffer.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://buffer.com/resources/content/images/2025/12/Popcorn-to-Go-from-Daniel--5-.png" class="kg-image" alt="Popcorn To Go: Our New Mobile Design System for iOS and Android" loading="lazy" width="1494" height="1350" srcset="https://buffer.com/resources/content/images/size/w600/2025/12/Popcorn-to-Go-from-Daniel--5-.png 600w, https://buffer.com/resources/content/images/size/w1000/2025/12/Popcorn-to-Go-from-Daniel--5-.png 1000w, https://buffer.com/resources/content/images/2025/12/Popcorn-to-Go-from-Daniel--5-.png 1494w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Our iOS app embracing Liquid Glass</span></figcaption></figure><h3 id="whats-next">What&apos;s next</h3><p>Popcorn To Go is live and working, but we&apos;re just getting started. 
Here&apos;s what&apos;s on the roadmap:</p><p><strong>Short-term:</strong></p><ul><li>Applying to Android and refining based on feedback on both platforms.</li><li>Expanding token coverage beyond colors (spacing scales, border radii, typography scales).</li><li>Improving discoverability with better documentation.</li></ul><p><strong>Medium-term:</strong></p><ul><li>Building out our custom component library with Buffer-specific patterns.</li><li>Creating comprehensive usage guidelines for the system.</li><li>Evolving with platform updates as iOS and Android continue to iterate.</li></ul><p><strong>Long-term:</strong></p><ul><li>Keeping pace with platform evolution (iOS 27 and beyond, Material Design updates, etc.).</li><li>Exploring opportunities to bring learnings back to our web design system, Popcorn.</li></ul><h3 id="why-it-matters">Why it matters</h3><p>For our <strong>designers and engineers</strong>, Popcorn To Go means smoother collaboration, faster prototyping, and less time spent on repetitive work. Instead of getting stuck on which colour to use where, teams can focus on solving more complex problems and crafting better experiences.</p><p>For <strong>Buffer users</strong>, it means more polished, consistent, and accessible apps. When design systems work well, users might not consciously notice &#x2014; but they <em>feel</em> it. Interactions are smoother, the UI is more predictable, and everything just works better.</p><h2 id="raising-the-bar">Raising the bar</h2><p>Building Popcorn To Go wasn&apos;t just about solving today&apos;s problems but about setting ourselves up for the future.</p><p>Mobile platforms are constantly evolving. Design trends shift. User expectations rise. By investing in a solid foundation now, we&apos;re making it easier to keep pace, ship faster, and maintain quality as we grow.</p><p>This project was a true team effort: designers, iOS engineers, Android engineers, and product leaders all collaborating to make it happen. 
It&apos;s the kind of work that doesn&apos;t always get the spotlight, but it&apos;s what enables everything else we build.</p><p>We&apos;re proud of what we&apos;ve created, and we&apos;re excited to keep building on it. If you want to see Popcorn To Go in action, <a href="https://apps.apple.com/app/buffer/id490474324">download our iOS app</a> and check out the new Liquid Glass experience.</p><p>Not on an Apple device? Keep an eye out, Popcorn To Go is coming to Android soon!</p><p>Here&apos;s to smoother collaboration, better apps, and a little more consistency in the chaos. &#x1F37F;</p>]]></content:encoded></item><item><title><![CDATA[We Replaced SMS Authentication With Email and Authenticator Apps — Here's Why]]></title><description><![CDATA[Here’s why and how we replaced SMS authentication with email and authenticator apps.]]></description><link>https://buffer.com/resources/we-replaced-sms-authentication-with-email-and-authenticator-apps-heres-why/</link><guid isPermaLink="false">68dfb65294c0a2000175e8f7</guid><category><![CDATA[Open]]></category><category><![CDATA[Overflow]]></category><dc:creator><![CDATA[Carlos Muñoz]]></dc:creator><pubDate>Fri, 03 Oct 2025 11:45:31 GMT</pubDate><media:content url="https://buffer.com/resources/content/images/2025/10/sms-authentication-replaced.png" medium="image"/><content:encoded><![CDATA[<img src="https://buffer.com/resources/content/images/2025/10/sms-authentication-replaced.png" alt="We Replaced SMS Authentication With Email and Authenticator Apps &#x2014; Here&apos;s Why"><p>At Buffer, security has always been a balance: keeping our customers&#x2019; accounts safe while making login as seamless as possible for our global user base.</p><p>A few months ago, we made a decision that might sound surprising &#x2014; we removed SMS-based two-factor authentication (2FA) and moved fully to email-based verification.</p><p>It wasn&#x2019;t a change we took lightly. SMS has long been seen as the standard for 2FA. 
But over time, the drawbacks began to outweigh the benefits.</p><p>Here&#x2019;s the story of how we got there, what the transition looked like, and what we&#x2019;ve seen since.</p><h2 id="why-we-moved-away-from-sms">Why we moved away from SMS</h2><p>SMS-based 2FA has long been considered a security standard, but our team discovered several critical issues that made us reconsider:</p><h3 id="security-vulnerabilities-were-more-common-than-expected"><strong>Security vulnerabilities were more common than expected</strong></h3><p>SIM swapping attacks have become increasingly sophisticated, allowing attackers to hijack phone numbers and bypass SMS-based security.</p><p>Additionally, SMS messages travel unencrypted through multiple carriers, creating potential interception points.</p><h3 id="costs-were-scaling-unsustainably"><strong>Costs were scaling unsustainably</strong></h3><p>Every authentication SMS costs money, and with our growing user base, these seemingly small fees were adding up to hundreds of dollars monthly. International SMS rates made this even more challenging because of our global user base.</p><h3 id="international-regulations-and-sender-id-requirements"><strong>International regulations and Sender ID requirements</strong></h3><p>SMS regulations vary dramatically by country, making compliance a constant challenge. 
Each country has different requirements for Sender IDs (the name that appears as the sender of an SMS), with some requiring pre-registration that can take weeks or months to complete.</p><p>For example, Singapore requires business verification documents, India demands a template pre-approval process, and the UAE has strict content restrictions.</p><p>Managing these requirements across 100+ countries created an enormous administrative burden that grew with each new regulation.</p><p>Additionally, failing to comply with any local regulation could result in messages being blocked, and ultimately customers being unable to log into Buffer.</p><h3 id="third-party-dependencies-created-failure-points"><strong>Third-party dependencies created failure points</strong></h3><p>We relied on SMS gateway providers that occasionally experienced outages, delivery delays, or rate-limiting issues.</p><p>When these services go down, our users cannot access their accounts&#x2014;a critical problem for a tool that powers social media strategies worldwide.</p><h2 id="why-email-made-more-sense">Why email made more sense</h2><p>When we looked for alternatives, we realized we already had a stronger option: email.</p><p>So instead of just removing SMS and calling it a day, we reimagined our authentication flow by incorporating email as another avenue.</p><p>We implemented time-limited, single-use verification codes sent via email with enhanced security headers and encryption. 
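</p><p>In outline, such a code needs three properties: unguessable, expiring, and invalidated on first use. A simplified sketch, not Buffer&apos;s actual implementation (code length, expiry window, and the in-memory store are all illustrative):</p>

```python
import secrets
import time

# email -> (code, expiry timestamp); a real system would use a datastore.
PENDING_CODES = {}

def issue_code(email: str, ttl_seconds: int = 600) -> str:
    """Generate a 6-digit code valid for ttl_seconds."""
    code = f"{secrets.randbelow(10**6):06d}"
    PENDING_CODES[email] = (code, time.time() + ttl_seconds)
    return code

def verify_code(email: str, attempt: str) -> bool:
    """Accept a code at most once, and only before it expires."""
    entry = PENDING_CODES.pop(email, None)  # pop makes it single-use
    if entry is None:
        return False
    code, expires_at = entry
    return attempt == code and time.time() < expires_at

code = issue_code("user@example.com")
print(verify_code("user@example.com", code))  # -> True
print(verify_code("user@example.com", code))  # -> False (already used)
```

<p>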
Our email infrastructure, which we already maintained for notifications and updates, proved more reliable than third-party SMS gateways.</p><p>We also added rate limiting and anomaly detection to prevent abuse.</p><h3 id="the-unexpected-benefits-of-switching-to-email">The unexpected benefits of switching to email</h3><p>The transition delivered improvements beyond our initial expectations:</p><ul><li><strong>Security actually improved.</strong> Email accounts typically have more robust security options than phone numbers, including their own 2FA, recovery options, and activity monitoring. Users maintain better control over their email accounts than their phone numbers, which can be transferred without their knowledge.</li><li><strong>Support tickets decreased.</strong> We saw a drop in authentication-related support requests. Users no longer struggled with international SMS delivery issues, changed phone numbers, or carrier-specific problems.</li><li><strong>Development velocity increased.</strong> Our engineering team no longer needs to maintain integrations with the SMS provider, debug delivery issues across different carriers, or handle country-specific SMS regulations.</li></ul><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/2025/10/CleanShot-2025-09-16-at-14.23.38@2x.png" class="kg-image" alt="We Replaced SMS Authentication With Email and Authenticator Apps &#x2014; Here&apos;s Why" loading="lazy" width="2000" height="1706" srcset="https://buffer.com/resources/content/images/size/w600/2025/10/CleanShot-2025-09-16-at-14.23.38@2x.png 600w, https://buffer.com/resources/content/images/size/w1000/2025/10/CleanShot-2025-09-16-at-14.23.38@2x.png 1000w, https://buffer.com/resources/content/images/size/w1600/2025/10/CleanShot-2025-09-16-at-14.23.38@2x.png 1600w, https://buffer.com/resources/content/images/size/w2400/2025/10/CleanShot-2025-09-16-at-14.23.38@2x.png 2400w" sizes="(min-width: 720px) 720px"></figure><h2 
id="how-we-rolled-out-the-switch">How we rolled out the switch</h2><p>Making this transition required careful planning.</p><p>We communicated the change to users well in advance, explaining the security benefits and addressing concerns. We provided detailed migration guides and temporarily supported both methods during the transition period.</p><p>For users who strongly preferred SMS, we helped them understand that modern email security, especially with providers like Gmail or Outlook that offer robust protection, provides equal or better security than SMS.</p><p>We also enhanced our email delivery infrastructure to ensure reliability, implementing redundant email service providers and monitoring delivery rates closely.</p><h2 id="the-right-choice-for-buffer">The right choice for Buffer</h2><p>This decision won&apos;t be right for every company. Services that don&apos;t have users&apos; email addresses or that serve demographics with limited email access might need different solutions. However, for Buffer &#x2014; where every user already has an email account associated with their profile &#x2014; this change aligned perfectly with our needs.</p><p>Three months after the transition, the results speak for themselves: a reduction in authentication-related support tickets, and significant monthly savings that we&apos;ve reinvested in product improvements.</p><h2 id="looking-ahead">Looking ahead</h2><p>Removing SMS authentication initially felt like swimming against the current, but it forced us to think critically about security theater versus actual security. Sometimes the &quot;standard&quot; solution isn&apos;t the best solution for your specific context.</p><p>We&apos;re continuing to explore additional authentication options, including support for hardware security keys. But our email-first approach has proven that simpler can indeed be more secure.</p><hr><p><em>We share these kinds of stories because we know other teams face similar tradeoffs. 
Have you reconsidered a &#x201C;standard&#x201D; security practice recently? We&#x2019;d love to hear from you on our social media! Find us @buffer everywhere and </em><a href="https://www.linkedin.com/in/cmunozgar/"><em>follow Carlos on LinkedIn here</em></a><em>.</em></p>]]></content:encoded></item><item><title><![CDATA[How We're Preventing Breaking Changes in GraphQL APIs at Buffer — and Why It's Essential for Our Customers]]></title><description><![CDATA[As part of our commitment to transparency and building in public, Buffer engineer Joe Birch shares how we’re doing this for our own GraphQL API via the use of GitHub Actions.]]></description><link>https://buffer.com/resources/how-were-preventing-breaking-changes-in-graphql-apis-at-buffer-and-why-its-essential-for-our-customers/</link><guid isPermaLink="false">668fb55c3928a9000133f159</guid><category><![CDATA[Overflow]]></category><category><![CDATA[Open]]></category><dc:creator><![CDATA[Joe Birch]]></dc:creator><pubDate>Fri, 12 Jul 2024 11:28:34 GMT</pubDate><media:content url="https://buffer.com/resources/content/images/2024/07/Changes-in-GraphQL-APIs.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x26A1;</div><div class="kg-callout-text">At Buffer, we&#x2019;re committed to full transparency &#x2014; which means building in public and sharing how our engineers work. You&#x2019;ll find more content like this on our <a href="https://buffer.com/resources/overflow/">Overflow Blog here</a>.</div></div><img src="https://buffer.com/resources/content/images/2024/07/Changes-in-GraphQL-APIs.png" alt="How We&apos;re Preventing Breaking Changes in GraphQL APIs at Buffer &#x2014; and Why It&apos;s Essential for Our Customers"><p>We&#x2019;ve all experienced it at some point &#x2014; a change is deployed to an API and suddenly, clients stop working. 
</p><p>User experience deteriorates, negative reviews start coming in, customer advocacy starts dealing with requests, multiple engineers start digging into the issue, and it quickly becomes all hands on deck. </p><p>Not only does this lose the trust of our users and interrupt their workflows, but it also costs an organization a lot of time and money to resolve these issues.</p><p>One way to prevent all of this from ever happening is to detect these breaking changes before they are merged into our repository, at the pull request stage. </p><p>This way, we can prevent such changes from ever being merged, avoiding a breaking experience for our clients and reducing downtime for our users.&#xA0;</p><p>As part of our commitment to transparency and building in public, I&#x2019;m going to share how we&#x2019;re doing this for our own GraphQL API via the use of GitHub Actions.</p><hr><p>We can detect breaking changes by taking the schema representation on the branch of the pull request and comparing it with the schema representation on the main branch.&#xA0;We can then use the result of this diff to determine whether our schema changes contain any breaking changes.</p><figure class="kg-card kg-image-card"><img src="https://paper-attachments.dropboxusercontent.com/s_D167A6326B8FE3E824DB286761B6D690FCE26409AE126FBC435D71AF47B05FC9_1720615094821_diag.png" class="kg-image" alt="How We&apos;re Preventing Breaking Changes in GraphQL APIs at Buffer &#x2014; and Why It&apos;s Essential for Our Customers" loading="lazy" width="2062" height="1064"></figure><p>We can break this workflow down into several steps:</p><ul><li><strong>Generate schema for the current branch: </strong>This will give us a schema that represents the changes we have made in our pull request</li><li><strong>Generate schema for the main branch:</strong> This will give us a schema that represents our current production API</li><li><strong>Perform verification of 
the current branch schema against the main branch schema: T</strong>his will tell us what changes exist in our schema comparison and if any of them will break clients</li><li><strong>Post the result to the Pull Request:</strong> This will allow us to &#x2018;fail&#x2019; the pull request to prevent it from being merged, along with alerting the author of the breaking changes</li></ul><p>With this in mind, we&#x2019;re going to build out an automated workflow that will run these operations for any Pull Request in our repository. For these pull request checks, we&#x2019;re using GitHub Actions, but most of the following code will work for whatever CI setup you are using.</p><p><strong>Note</strong>: We won&#x2019;t be diving too much into the concepts of GitHub Actions here. If you are not familiar with Actions, I suggest following the <a href="https://docs.github.com/en/actions/quickstart" rel="noreferrer nofollow noopener">quickstart tutorial</a>.</p><hr><h2 id="setting-up-the-workflow">Setting up the workflow</h2><p>We&#x2019;re going to start by setting up a new GitHub Action: we&#x2019;ll create a new file named <em>breaking_change_check.yml</em> and start by giving our Action a <em>name</em>.</p><p><code>name: Schema Change Verification</code></p><p>Next, we&#x2019;ll want to specify when this action is going to be run &#x2014; this will essentially allow us to define when we want to perform the checks in the PR.&#xA0;</p><p>We&#x2019;ll not only want to do this on <em>opened</em> events (when the PR is initially opened), but also if it is <em>reopened</em> or <em>synchronized</em> &#x2014; which will allow the checks to re-run if there are additional commits pushed to the pull request.&#xA0;</p><p>This ensures that we are always checking for breaking changes on the latest commits pushed to the branch.</p><pre><code class="language-javascript">name: Schema Change Verification
on:
  pull_request:
    types:
      - opened
      - reopened
      - synchronize</code></pre><p>We&#x2019;ll also only want to run these checks when <em>.graphql</em> schema files are changed, so we&#x2019;ll specify this rule using the <em>paths</em> property.</p><pre><code class="language-javascript">name: Schema Change Verification
on:
  pull_request:
    types:
      - opened
      - reopened
      - synchronize
    paths:
      - &apos;graphql/*/**.graphql&apos;</code></pre><p><em>graphql/*/**.graphql</em> is the path where our GraphQL schema files are located; you will need to adjust this according to your project.</p><hr><h2 id="generating-the-current-branch-schema">Generating the current branch schema</h2><p>Now that we have the foundations of our action configured, we can move on to defining the jobs that will be responsible for generating and verifying our schema.</p><p>We&#x2019;ll start here by defining a new job <em>generateChangedSchema</em> and specifying the use of the <em>ubuntu-latest</em> runner.</p><pre><code class="language-javascript">name: Schema Change Verification
...

jobs:
  generateChangedSchema:
    runs-on: [ubuntu-latest]</code></pre><p>Next, we&#x2019;ll need to perform a couple of setup operations for our job. We&#x2019;ll want to start by checking out the repository at the branch of our PR, for which we&#x2019;ll use the <em>checkout</em> action. </p><p>Our action is also going to utilize <em>node</em>, so we&#x2019;ll need to install this using the <em>setup-node</em> action.</p><pre><code class="language-javascript">name: Schema Change Verification
...

jobs:
  generateChangedSchema:
    runs-on: [ubuntu-latest]
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v3
        with:
          node-version: &apos;18&apos;</code></pre><p>At this point, we&#x2019;re ready to move on to the generation of our schema. For this, we&#x2019;re going to need to load our schema files and then merge them into a single schema file. This makes the verification process much simpler as we only need to work with a single file instead of multiple.</p><p>To make this process easier, we&#x2019;re going to utilize a couple of dependencies from <em>graphql-tools</em>. We can see that these are named according to what we need them for; we just need to add them to our <em>package.json</em> file as a dev-dependency.</p><pre><code class="language-javascript">&quot;@graphql-tools/load-files&quot;: &quot;7.0.0&quot;,
&quot;@graphql-tools/merge&quot;: &quot;9.0.1&quot;</code></pre><p>With these dependencies in place, we&#x2019;re now going to write a small script that will load and merge all of the <em>.graphql</em> files in a given directory.</p><pre><code class="language-javascript">
import * as fs from &apos;fs&apos;
import { loadFilesSync } from &apos;@graphql-tools/load-files&apos;
import { mergeTypeDefs } from &apos;@graphql-tools/merge&apos;
import { print } from &apos;graphql/language/printer&apos;

const mergeFiles = async (): Promise&lt;void&gt; =&gt; {
  try {
    // using the provided path, load the types from the schema files
    const typesArray = loadFilesSync(`../../graphql/src`, {
      extensions: [&apos;graphql&apos;],
    })
 
    // merge all of the types from the received schemas,
    // compressing all of the types into a single place
    const result = mergeTypeDefs(typesArray)
    // write the schema to a single file to be used for diffing
    await fs.promises.writeFile(&apos;generated/schema.graphql&apos;, print(result))
  } catch (e) {
    console.error(&quot;We&apos;ve thrown! Whoops!&quot;, e)
  }
}

;(async (): Promise&lt;void&gt; =&gt; {
  try {
    // the merged schema file will be created in the generated directory, so create
    // the directory if it does not yet exist
    if (!fs.existsSync(&apos;generated&apos;)) {
      fs.mkdirSync(&apos;generated&apos;)
    }
    await mergeFiles()
  } catch (e) {
    console.error(&quot;We&apos;ve thrown an error! Whoops!&quot;, e)
  }
})()</code></pre><p>We&#x2019;ll then add a new script to our package.json file so that we can easily execute this with the required arguments. This script takes a single argument when being executed, which is the path for the merged schema to be saved.&#xA0;</p><p>If you need to provide paths for the location of schema files, this can be done through additional arguments. Our schemas are located in a single directory, so we hardcode this inside of the script itself.</p><p><code>&quot;graph:generateSchema&quot;: &quot;ts-node scripts/generateSchema.ts&quot;</code></p><p>With this in place, we can now execute this command from our <em>generateChangedSchema</em> job. For this, we&#x2019;ll use a bash script step where we&#x2019;ll need to navigate to the directory where the <em>generateSchema</em> script lives and then use <em>npm</em> to run it.</p><pre><code class="language-javascript">name: Schema Change Verification
...

jobs:
  generateChangedSchema:
    runs-on: [ubuntu-latest]
    steps:
      ...
      - name: Generate Schema
        run: |
          cd services/api-gateway
          npm run graph:generateSchema</code></pre><p>At this point, we will have a merged schema file that contains the contents of all our schema files. To wrap up this <em>job</em> we&#x2019;re going to attach this schema file to our workflow run; this is so that we can download that file for use within the next <em>job </em>in our workflow.&#xA0;</p><p>While this could all be done in a single <em>job,</em> your actions can be kept far more organized if work is broken down into smaller chunks. Here, we&#x2019;ll use the <em>upload-artifact</em> action and attach the schema file from the path that we saved it to, assigning it a name of <em>branch-schema</em> for referencing it later.</p><pre><code class="language-javascript">name: Schema Change Verification
...

jobs:
  generateChangedSchema:
    runs-on: [ubuntu-latest]
    steps:
      ...
      - name: Attach schema
        uses: actions/upload-artifact@v1
        with:
          name: branch-schema
          path: services/api-gateway/generated/schema.graphql</code></pre><p>At this point, we have created a merged schema file that represents all of the schemas in our project and attached this single file to our workflow, meaning that the step for generating the schema representation for the current branch is now complete.</p><hr><h2 id="verify-schema-changes">Verify schema changes</h2><p>Now that we have the schema for our branch generated, we&#x2019;re going to want to verify this against the schema representation on the main branch.&#xA0;</p><p>We&#x2019;ll start here by setting up the second job in our workflow, <em>performSchemaVerification</em>. We&#x2019;ll use the <em>needs</em> property to declare that this job will wait for the <em>generateChangedSchema</em> to complete successfully before running.</p><pre><code class="language-javascript">name: Schema Change Verification
...

jobs:
  generateChangedSchema:
    ...
  performSchemaVerification:
    runs-on: [ubuntu-latest]
    needs: generateChangedSchema</code></pre><p>Similar to the previous <em>job</em>, we&#x2019;ll configure some foundations for this <em>job</em> by checking out the repository and configuring node. The only difference here is that when triggering the <em>checkout</em> action we&#x2019;ll pass a <em>ref</em> property with the value of <em>main</em>.&#xA0;</p><p>This is because we need to generate the schema for the <em>main</em> branch for verification, so we need the <em>main</em> branch to be the current branch when checking out the repository.</p><pre><code class="language-javascript">name: Schema Change Verification
...

jobs:
  generateChangedSchema:
    ...
  performSchemaVerification:
    runs-on: [ubuntu-latest]
    needs: generateChangedSchema
    steps:
      - uses: actions/checkout@v2
        with:
          ref: main
      - uses: actions/setup-node@v3
        with:
          node-version: &apos;18&apos;</code></pre><p>Before we can verify schemas, we&#x2019;ll need to go ahead and download the schema that we generated in the last <em>job</em>.&#xA0;</p><p>For this we can use the <em>download-artifact</em> action, providing the <em>name</em> reference for the file that we want to download (which we previously defined as <em>branch-schema</em>), followed by the path that the schema should be downloaded to.</p><pre><code class="language-javascript">name: Schema Change Verification
...

jobs:
  generateChangedSchema:
    ...
  performSchemaVerification:
    runs-on: [ubuntu-latest]
    needs: generateChangedSchema
    steps:
      ...
      - name: Download branch schema
        uses: actions/download-artifact@v1
        with:
          name: branch-schema
          path: services/api-gateway/branch-schema</code></pre><p>And then so that we have a schema representation for our <em>main</em> branch to compare this to, we&#x2019;ll go ahead and execute our <em>generateSchema</em> command. This will do the same as before, except this time, we will have a single schema file that represents our <em>main</em> branch instead of the branch for our pull request.</p><pre><code class="language-javascript">name: Schema Change Verification
...

jobs:
  generateChangedSchema:
    ...
  performSchemaVerification:
    runs-on: [ubuntu-latest]
    needs: generateChangedSchema
    steps:
      ...
      - name: Generate main schema
        run: |
          cd services/api-gateway
          npm run graph:generateSchema</code></pre><p>At this point, we have the two schema files that we need for the verification step. For this verification, we&#x2019;re going to utilize <em>graphql-inspector</em>.&#xA0;</p><p>This tool contains a verification process that allows you to diff two schemas and returns the changes in that diff. We&#x2019;ll start here by adding these as dev-dependencies to our project.</p><pre><code class="language-javascript">&quot;@graphql-inspector/ci&quot;: &quot;^4.0.2&quot;,
&quot;@graphql-inspector/diff-command&quot;: &quot;^4.0.2&quot;</code></pre><p>Next, we&#x2019;ll add another command to our <em>package.json</em> file that we will use to execute this verification process. For this command we&#x2019;ll use <em>graphql-inspector</em> and its diff command, for which we&#x2019;ll need to provide two arguments.&#xA0;</p><p>The first is the schema path for the main branch, which we just generated in this step. The second is the schema path for the PR branch, which we previously downloaded and saved to the <em>branch-schema</em> path.&#xA0;</p><p>It&#x2019;s important here that your <em>main</em> schema is passed as the first argument, as this represents your base schema, while the second represents the changes your branch is introducing.</p><p><code>&quot;graph:verifySchema&quot;: &quot;graphql-inspector diff generated/schema.graphql branch-schema/schema.graphql&quot;,</code></p><p>With this command in place, we&#x2019;re now going to execute it in our <em>step</em>. We&#x2019;ll need the result of this command so that we can depict the breaking change state, so we&#x2019;ll save its output to a variable reference.</p><pre><code class="language-javascript">name: Schema Change Verification
...

jobs:
  generateChangedSchema:
    ...
  performSchemaVerification:
    runs-on: [ubuntu-latest]
    needs: generateChangedSchema
    steps:
      ...
      - name: Generate main schema
        run: |
          cd services/api-gateway
          npm run graph:generateSchema
          OUTPUT=$(npm run graph:verifySchema || true)</code></pre><p>When it comes to the result of the diff operation, there is a lot of content that is output to the console. In the context of a PR comment, the author is only going to care about the information in the context of their changes. For this reason, we&#x2019;re going to extract this information from the output so that we can use it for the PR comment.</p><p>The content that we wish to extract starts from &#x201C;Detected N breaking changes,&#x201D; so we&#x2019;ll use this to extract the text from the diff output.</p><pre><code class="language-javascript">name: Schema Change Verification
...

jobs:
  generateChangedSchema:
    ...
  performSchemaVerification:
    runs-on: [ubuntu-latest]
    needs: generateChangedSchema
    steps:
      ...
      - name: Generate main schema
        run: |
          cd services/api-gateway
          npm run graph:generateSchema
          OUTPUT=$(npm run graph:verifySchema || true)
          # grab everything after the Detected string
          CONTENT=${OUTPUT#*Detected}
          # reapply the Detected string at the start of the content
          FORMATTED=&quot;Detected&quot;$CONTENT
          # write the extracted content to an environment variable
          echo &quot;PR_COMMENT&lt;&lt;EOF&quot; &gt;&gt; $GITHUB_ENV
          echo &quot;$FORMATTED&quot; &gt;&gt; $GITHUB_ENV
          echo &quot;EOF&quot; &gt;&gt; $GITHUB_ENV</code></pre><p>At this point, we have the message for the comment stored in an environment variable; this will look something like the following, depending on the result of the diff.</p><pre><code class="language-javascript">Detected the following changes (1) between schemas:
[log] &#x2716; Input field aNewType of type String! was added to input object type OrganizationIdInput
[error] Detected 1 breaking change</code></pre><p>Now that we have this content, we&#x2019;re going to want to publish it as a comment on the Pull Request. For this we&#x2019;ll use the <em>create-or-update-comment</em> action, posting the content of the previously created environment variable.</p><pre><code class="language-javascript">name: Schema Change Verification
...

jobs:
  generateChangedSchema:
    ...
  performSchemaVerification:
    runs-on: [ubuntu-latest]
    needs: generateChangedSchema
    steps:
      ...
      - name: Create comment
        uses: peter-evans/create-or-update-comment@v1
        with:
          issue-number: ${{ github.event.pull_request.number }}
          body: ${{ env.PR_COMMENT }}
        if: ${{ !contains(env.PR_COMMENT, &apos;No changes detected&apos;) }}</code></pre><p>Now that we have the status of our breaking change, we&#x2019;re going to want to set the status of our PR check based on this. Here we&#x2019;re going to add a new step to our job, but only run this step if the generated comment does not contain the &#x2018;success&#x2019; label.&#xA0;</p><p>This is because by default, the Action will have the success status, it will only be otherwise if we manually mark it as failed (or it fails for some other reason). When this is the case, we&#x2019;ll utilise the <em>github-script</em> action to set the failed status for our check, along with a failure reason.&#xA0;</p><p>This way, the check will fail within the Pull Request and the author will be unable to merge the Pull Request until the issue is resolved.</p><pre><code class="language-javascript">name: Schema Change Verification
...

jobs:
  generateChangedSchema:
    ...
  performSchemaVerification:
    steps:
      ...
      - name: Set Breaking Change status
        if: ${{ !contains(env.PR_COMMENT, &apos;success&apos;) }}
        uses: actions/github-script@v3
        with:
          script: |
            core.setFailed(&apos;Schema Breaking Changes detected&apos;)</code></pre><hr><h2 id="wrapping-up">Wrapping up</h2><p>With all of the above in place, we will now be able to see the breaking change states published as comments on our Pull Request. For success states, engineers will be made aware of their schema changes, highlighting that no breaking changes were detected.</p><figure class="kg-card kg-image-card"><img src="https://paper-attachments.dropboxusercontent.com/s_D167A6326B8FE3E824DB286761B6D690FCE26409AE126FBC435D71AF47B05FC9_1720076347581_Screenshot+2024-07-04+at+07.58.25.png" class="kg-image" alt="How We&apos;re Preventing Breaking Changes in GraphQL APIs at Buffer &#x2014; and Why It&apos;s Essential for Our Customers" loading="lazy" width="2336" height="590"></figure><p>On the other hand, any breaking changes will be highlighted in the published comment.</p><figure class="kg-card kg-image-card"><img src="https://paper-attachments.dropboxusercontent.com/s_D167A6326B8FE3E824DB286761B6D690FCE26409AE126FBC435D71AF47B05FC9_1720076353237_Screenshot+2024-07-04+at+07.58.55.png" class="kg-image" alt="How We&apos;re Preventing Breaking Changes in GraphQL APIs at Buffer &#x2014; and Why It&apos;s Essential for Our Customers" loading="lazy" width="2356" height="638"></figure><p>When there is a breaking change, the check will be marked as a failure and the pull request will be unable to be merged.</p><figure class="kg-card kg-image-card"><img src="https://paper-attachments.dropboxusercontent.com/s_D167A6326B8FE3E824DB286761B6D690FCE26409AE126FBC435D71AF47B05FC9_1720077597797_Screenshot+2024-07-04+at+08.19.48.png" class="kg-image" alt="How We&apos;re Preventing Breaking Changes in GraphQL APIs at Buffer &#x2014; and Why It&apos;s Essential for Our Customers" loading="lazy" width="3792" height="1142"></figure><p>Now that we have breaking change checks in place, engineers will be unable to merge changes that will break the client experience. 
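</p><p>To make the diffing idea concrete, here is a deliberately simplified TypeScript sketch of the comparison at the heart of the check. The real work in our workflow is done by <em>graphql-inspector</em>, which also understands arguments, nullability, directives, and deprecations; this toy version only compares flattened field signatures:</p>

```typescript
// Toy illustration of schema diffing; NOT graphql-inspector itself.
// A "schema" here is just a map of "Type.field" -> field type.
type FlatSchema = Record<string, string>;

interface SchemaChange {
  message: string;
  breaking: boolean;
}

export function diffSchemas(main: FlatSchema, branch: FlatSchema): SchemaChange[] {
  const changes: SchemaChange[] = [];
  // fields that existed on main but were removed or retyped break clients
  for (const [field, type] of Object.entries(main)) {
    if (!(field in branch)) {
      changes.push({ message: `Field ${field} was removed`, breaking: true });
    } else if (branch[field] !== type) {
      changes.push({
        message: `Field ${field} changed type from ${type} to ${branch[field]}`,
        breaking: true,
      });
    }
  }
  // brand new fields are safe: existing clients never query them
  for (const field of Object.keys(branch)) {
    if (!(field in main)) {
      changes.push({ message: `Field ${field} was added`, breaking: false });
    }
  }
  return changes;
}
```

<p>Removals and type changes come back flagged as breaking while additions do not, which is exactly the signal the workflow turns into a failed pull request check.</p><p>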
This helps us reduce downtime for our users, increasing trust in our product and reducing the business cost of incident management.</p><p>If you&#x2019;re not already using breaking change checks in your CI, now is the time to get started! I&#x2019;d love to hear about any learnings you have along the way to making this a part of your development workflow. Comment below or <a href="https://www.linkedin.com/in/j-birch/">find me on LinkedIn</a>.&#xA0;</p>]]></content:encoded></item><item><title><![CDATA[Highlighting Text Input with Jetpack Compose]]></title><description><![CDATA[Learn more about the new feature at Buffer, called Ideas. With Ideas, you can store all your best ideas, tweak them until they’re ready and more.]]></description><link>https://buffer.com/resources/highlighting-text-input-with-jetpack-compose/</link><guid isPermaLink="false">63985b4383cbc3003d4d2f65</guid><category><![CDATA[Overflow]]></category><dc:creator><![CDATA[Joe Birch]]></dc:creator><pubDate>Tue, 13 Dec 2022 18:32:36 GMT</pubDate><media:content url="https://buffer.com/resources/content/images/2022/12/aaron-burden-Hzi7U2SZ2GE-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://buffer.com/resources/content/images/2022/12/aaron-burden-Hzi7U2SZ2GE-unsplash.jpg" alt="Highlighting Text Input with Jetpack Compose"><p>We recently launched a new feature at Buffer, called <a href="https://buffer.com/ideas" rel="noreferrer nofollow noopener">Ideas</a>. With Ideas, you can store all your best ideas, tweak them until they&#x2019;re ready, and drop them straight into your Buffer queue. Now that Ideas has launched in our web and mobile apps, we have some time to share some learnings from the development of this feature. 
In this blog post, we&#x2019;ll dive into how we added support for URL highlighting to the Ideas Composer on Android, using Jetpack Compose.</p><hr><p>We started adopting Jetpack Compose into our app in 2021 - using it as standard to build all our new features, while gradually adopting it into existing parts of our application. We built the whole of the Ideas feature using Jetpack Compose - so alongside faster feature development and greater predictability within the state of our UI, we had plenty of opportunities to further explore Compose and learn more about how to achieve certain requirements in our app.</p><p><br>Within the Ideas composer, we support dynamic link highlighting. This means that if you type a URL into the text area, then the link will be highlighted - tapping on this link will then show an &#x201C;Open link&#x201D; pop-up, which will launch the link in the browser when clicked.</p><figure class="kg-card kg-image-card"><img src="https://paper-attachments.dropboxusercontent.com/s_70CDC5BFE3F8A083A9AA9227345C1F454192F224108DA4223901379386BD986B_1670334086472_ezgif-1-1b6718525f.gif" class="kg-image" alt="Highlighting Text Input with Jetpack Compose" loading="lazy"></figure><p>In this blog post, we&#x2019;re going to focus on the link highlighting implementation and how this can be achieved in Jetpack Compose using the <code>TextField</code> composable.</p><hr><p>For the Ideas composer, we&#x2019;re utilising the <code>TextField</code> composable to support text entry. This composable contains an argument, <code>visualTransformation</code>, which is used to apply visual changes to the entered text.</p><pre><code>TextField(
    ...
    visualTransformation = ...
)</code></pre><p>This argument requires a <code>VisualTransformation</code> implementation which is used to apply the visual transformation to the entered text. If we look at the source code for this interface, we&#x2019;ll notice a filter function which takes the content of the TextField and returns a <code>TransformedText</code> reference that contains the modified text.</p><pre><code>@Immutable
fun interface VisualTransformation {
    fun filter(text: AnnotatedString): TransformedText
}</code></pre><p>When it comes to this modified text, we are required to provide the implementation that creates a new <code>AnnotatedString</code> reference with our applied changes. This changed content then gets bundled in the <code>TransformedText</code> type and returned back to the <code>TextField</code> for composition.</p><p><br>So that we can define and apply transformations to the content of our <code>TextField</code>, we need to start by creating a new implementation of the <code>VisualTransformation</code> interface for which we&#x2019;ll create a new class, <code>UrlTransformation</code>. This class will implement the <code>VisualTransformation</code> argument, along with taking a single argument in the form of a <code>Color</code>. We define this argument so that we can pass a theme color reference to be applied within our logic, as we are going to be outside of composable scope and won&#x2019;t have access to our composable theme.</p><pre><code>class UrlTransformation(
    val color: Color
) : VisualTransformation {

}</code></pre><p>With this class defined, we now need to implement the filter function from the <code>VisualTransformation</code> interface. Within this function we&#x2019;re going to return an instance of the <code>TransformedText</code> class - we can jump into the source code for this class and see that there are two properties required when instantiating this class.</p><pre><code>/**
 * The transformed text with offset offset mapping
 */
class TransformedText(
    /**
     * The transformed text
     */
    val text: AnnotatedString,

    /**
     * The map used for bidirectional offset mapping from original to transformed text.
     */
    val offsetMapping: OffsetMapping
)</code></pre><p>Both of these arguments are required, so we&#x2019;re going to need to provide a value for each when instantiating the <code>TransformedText</code> class.</p><ul><li><strong>text</strong> - this will be the modified version of the text that is provided to the filter function</li><li><strong>offsetMapping</strong> - as per the documentation, this is the map used for bidirectional offset mapping from original to transformed text</li></ul><pre><code>class UrlTransformation(
    val color: Color
) : VisualTransformation {
    override fun filter(text: AnnotatedString): TransformedText {
        return TransformedText(
            ...,
            OffsetMapping.Identity
        )
    }
}</code></pre><p>For the <code>offsetMapping</code> argument, we simply pass the <code>OffsetMapping.Identity</code> value - this is the predefined default implementation of the <code>OffsetMapping</code> interface, which can be used when the transformation does not change the character count. When it comes to the text argument, we&#x2019;ll need to write some logic that will take the current content, apply the highlighting and return it as a new <code>AnnotatedString</code> reference to be passed into our <code>TransformedText</code> reference. For this logic, we&#x2019;re going to create a new function, <code>buildAnnotatedStringWithUrlHighlighting</code>. This is going to take two arguments - the text that is to be highlighted, along with the color to be used for the highlighting.</p><pre><code>fun buildAnnotatedStringWithUrlHighlighting(
    text: String, 
    color: Color
): AnnotatedString {
    
}</code></pre><p>From this function, we need to return an <code>AnnotatedString</code> reference, which we&#x2019;ll create using <code>buildAnnotatedString</code>. Within this function, we&#x2019;ll start by using the append operation to set the textual content of the <code>AnnotatedString</code>.</p><pre><code>fun buildAnnotatedStringWithUrlHighlighting(
    text: String, 
    color: Color
): AnnotatedString {
    return buildAnnotatedString {
        append(text)
    }
}</code></pre><p>Next, we&#x2019;ll need to take the contents of our string and apply highlighting to any URLs that are present. Before we can do this, we need to identify the URLs in the string. URL detection might vary depending on the use case, so to keep things simple let&#x2019;s write some example code that will find the URLs in a given piece of text. This code will take the given string and filter the URLs, providing a list of URL strings as the result. Note that since <code>text</code> is a non-null <code>String</code>, no safe calls are needed here.</p><pre><code>text.split(&quot;\\s+&quot;.toRegex()).filter { word -&gt;
    Patterns.WEB_URL.matcher(word).matches()
}</code></pre><p>Now that we know what URLs are in the string, we&#x2019;re going to need to apply highlighting to them. This is going to be in the form of an annotated string style, which is applied using the <code>addStyle</code> operation.</p><pre><code>fun addStyle(style: SpanStyle, start: Int, end: Int)</code></pre><p>When calling this function, we need to pass the <code>SpanStyle</code> that we wish to apply, along with the start and end index that this styling should be applied to. We&#x2019;re going to start by calculating this start and end index - to keep things simple, we&#x2019;re going to assume there are only unique URLs in our string.</p><pre><code>text.split(&quot;\\s+&quot;.toRegex()).filter { word -&gt;
    Patterns.WEB_URL.matcher(word).matches()
}.forEach {
    val startIndex = text.indexOf(it)
    val endIndex = startIndex + it.length
}</code></pre><p>Here we locate the start index by using the <code>indexOf</code> function, which will give us the starting index of the given URL. We&#x2019;ll then use this start index and the length of the URL to calculate the end index. We can then pass these values to the corresponding arguments for the <code>addStyle</code> function.</p><pre><code>text.split(&quot;\\s+&quot;.toRegex()).filter { word -&gt;
    Patterns.WEB_URL.matcher(word).matches()
}.forEach {
    val startIndex = text.indexOf(it)
    val endIndex = startIndex + it.length
    addStyle(
        style = ...,
        start = startIndex, 
        end = endIndex
    )
}</code></pre><p>Next, we need to provide the <code>SpanStyle</code> that we want to be applied to the given index range. Here we want to simply highlight the text using the provided color, so we&#x2019;ll pass the color value from our function arguments as the color argument for the <code>SpanStyle</code> function.</p><!--kg-card-begin: markdown--><pre><code>text.split(&quot;\\s+&quot;.toRegex()).filter { word -&gt;
    Patterns.WEB_URL.matcher(word).matches()
}.forEach {
    val startIndex = text.indexOf(it)
    val endIndex = startIndex + it.length
    addStyle(
        style = SpanStyle(
            color = color
        ),
        start = startIndex, 
        end = endIndex
    )
}
</code></pre>
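The index arithmetic above can be sanity-checked outside of an Android project. The sketch below is plain Kotlin; note that <code>android.util.Patterns.WEB_URL</code> is Android-only, so a deliberately simplified URL regex (an assumption, not equivalent to <code>WEB_URL</code>) stands in for it here:

```kotlin
// Plain-Kotlin sketch of the start/end index calculation used above.
// `simpleUrlRegex` is a simplified stand-in for android.util.Patterns.WEB_URL.
val simpleUrlRegex = Regex("""https?://\S+""")

fun urlRanges(text: String): List<Pair<Int, Int>> =
    text.split(Regex("""\s+"""))
        .filter { word -> simpleUrlRegex.matches(word) }
        .map { url ->
            // As above, this assumes each URL appears only once in the text.
            val startIndex = text.indexOf(url)
            startIndex to (startIndex + url.length)
        }
```

For example, `urlRanges("see https://buffer.com today")` yields the single range `(4, 22)` - exactly the values that would be handed to `addStyle`.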
<!--kg-card-end: markdown--><p>With this in place, we now have a complete function that will take the provided text and highlight any URLs using the provided <code>Color</code> reference.</p><pre><code class="language-kotlin">fun buildAnnotatedStringWithUrlHighlighting(
    text: String, 
    color: Color
): AnnotatedString {
    return buildAnnotatedString {
        append(text)
        text.split(&quot;\\s+&quot;.toRegex()).filter { word -&gt;
            Patterns.WEB_URL.matcher(word).matches()
        }.forEach {
            val startIndex = text.indexOf(it)
            val endIndex = startIndex + it.length
            addStyle(
                style = SpanStyle(
                    color = color,
                    textDecoration = TextDecoration.None
                ),
                start = startIndex, end = endIndex
            )
        }
    }
}</code></pre><p>We&#x2019;ll then need to hop back into our <code>UrlTransformation</code> class and pass the result of the <code>buildAnnotatedStringWithUrlHighlighting</code> function call for the <code>TransformedText</code> argument.</p><pre><code>class UrlTransformation(
    val color: Color
) : VisualTransformation {
    override fun filter(text: AnnotatedString): TransformedText {
        return TransformedText(
            buildAnnotatedStringWithUrlHighlighting(text.text, color),
            OffsetMapping.Identity
        )
    }
}</code></pre><p>Now that our <code>UrlTransformation</code> implementation is complete, we can instantiate this and pass the reference for the <code>visualTransformation</code> &#xA0;argument of the <code>TextField</code> composable. Here we are using the desired color from our <code>MaterialTheme</code> reference, which will be used when highlighting the URLs in our <code>TextField</code> content.</p><pre><code>TextField(
    ...
    visualTransformation = UrlTransformation(
        MaterialTheme.colors.secondary)
)</code></pre><hr><p>With the above in place, we now have dynamic URL highlighting support within our <code>TextField</code> composable. This means that now whenever the user inserts a URL into the composer for an Idea, we identify this as a URL by highlighting it using the secondary color from our theme.</p><figure class="kg-card kg-image-card"><img src="https://paper-attachments.dropboxusercontent.com/s_70CDC5BFE3F8A083A9AA9227345C1F454192F224108DA4223901379386BD986B_1670334086472_ezgif-1-1b6718525f.gif" class="kg-image" alt="Highlighting Text Input with Jetpack Compose" loading="lazy" width="600" height="259"></figure><p>In this post, we&#x2019;ve learnt how we can apply dynamic URL highlighting to the contents of a <code>TextField</code> composable. In the next post, we&#x2019;ll explore how we added the &#x201C;Open link&#x201D; pop-up when a URL is tapped within the composer input area.</p>]]></content:encoded></item><item><title><![CDATA[Secure Access To Opensearch on AWS]]></title><description><![CDATA[With the surprising swap of Elasticsearch for OpenSearch on AWS, learn how the team at Buffer achieved secure access without AWS credentials.]]></description><link>https://buffer.com/resources/secure-access-to-opensearch-on-aws/</link><guid isPermaLink="false">624f57a8042c01004d254c40</guid><category><![CDATA[Overflow]]></category><dc:creator><![CDATA[Peter Emil]]></dc:creator><pubDate>Mon, 18 Apr 2022 13:49:52 GMT</pubDate><media:content url="https://buffer.com/resources/content/images/2022/04/kari-shea-1SAnrIxw5OY-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://buffer.com/resources/content/images/2022/04/kari-shea-1SAnrIxw5OY-unsplash.jpg" alt="Secure Access To Opensearch on AWS"><p>At Buffer, we&#x2019;ve been working on a better admin dashboard for our customer advocacy team. This admin dashboard included much more powerful search functionality. 
Nearing the end of the project&#x2019;s timeline, we were confronted with AWS&#x2019;s replacement of managed Elasticsearch with managed OpenSearch. Our project had been built on top of newer versions of the elasticsearch client, which <a href="https://aws.amazon.com/blogs/opensource/keeping-clients-of-opensearch-and-elasticsearch-compatible-with-open-source/" rel="noreferrer nofollow noopener">suddenly didn&#x2019;t support</a> OpenSearch.</p><p>To add more fuel to the fire, OpenSearch clients for the languages we use did not yet support transparent AWS Sigv4 signatures. AWS Sigv4 signing is a requirement to authenticate to the OpenSearch cluster using AWS credentials.</p><p>This meant the path forward came down to one of these options:</p><ul><li>Leave our search cluster open to the world without authentication, so it would work with the OpenSearch client. Needless to say, this is a huge NO GO for obvious reasons.</li><li>Refactor our code to send raw HTTP requests and implement the AWS Sigv4 mechanism ourselves on these requests. This is infeasible, and we wouldn&#x2019;t want to reinvent a client library ourselves!</li><li>Build a plugin/middleware for the client that implements AWS Sigv4 signing. This would work at first, but Buffer is not a big team and, with constant service upgrades, this is not something we can reliably maintain.</li><li>Switch our infrastructure to use an elasticsearch cluster hosted on Elastic&#x2019;s cloud. This entailed a huge amount of effort as we examined Elastic&#x2019;s Terms of Service, pricing, requirements for a secure networking setup and other time-expensive measures.</li></ul><p><br>It seemed like this project was stuck for the long haul! 
Or was it?</p><p>Looking at the situation, here are the constants we can&#x2019;t feasibly change.</p><ul><li>We can&#x2019;t use the elasticsearch client anymore.</li><li>Switching to the OpenSearch client would work if the cluster were open and required no authentication.</li><li>We can&#x2019;t leave the OpenSearch cluster open to the world for obvious reasons.</li></ul><p><br>Wouldn&#x2019;t it be nice if the OpenSearch cluster was open ONLY to the applications that need it?</p><p>If this could be accomplished, those applications would be able to connect to the cluster without authentication, allowing them to use the existing OpenSearch client, but for everything else, the cluster would be unreachable.<br>With that end goal in mind, we architected the following solution.</p><h2 id="piggybacking-off-our-recent-migration-from-self-managed-kubernetes-to-amazon-eks">Piggybacking off our recent migration from self-managed Kubernetes to Amazon EKS</h2><p>We recently migrated our computational infrastructure from a self-managed Kubernetes cluster to another cluster that&#x2019;s managed by Amazon EKS.<br>With this migration, we swapped our container networking interface (CNI), flannel, for VPC CNI. This meant that we eliminated the overlay/underlay networks split and that all our pods were now getting VPC routable IP addresses.<br>This will become more relevant going forward.</p><h2 id="block-cluster-access-from-the-outside-world">Block cluster access from the outside world</h2><p>We created an OpenSearch cluster in a private VPC (no internet-facing IP addresses). 
This means the cluster&#x2019;s IP addresses would not be reachable over the internet, only from internal VPC routable IP addresses.<br>We added three security groups to the cluster to control which VPC IP addresses are allowed to reach the cluster.</p><h2 id="build-automations-to-control-what-is-allowed-to-access-the-cluster">Build automations to control what is allowed to access the cluster</h2><p>We built two automations running as AWS Lambdas.</p><ul><li>Security Group Manager: This automation can execute two processes on-demand.</li><li>-&gt; Add an IP address to one of those three security groups (the one with the least number of rules at the time of addition).</li><li>-&gt; Remove an IP address everywhere it appears in those three security groups.</li><li>Pod Lifecycle Auditor: This automation runs on a schedule, and we&#x2019;ll get to what it does in a moment.</li></ul><h1 id="how-it-all-connects-together">How it all connects together</h1><p>We added an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer nofollow noopener">InitContainer</a> to all pods needing access to the OpenSearch cluster that, on start, will execute the Security Group Manager automation and ask it to add the pod&#x2019;s IP address to one of the security groups. This allows the pod to reach the OpenSearch cluster.<br>In real life, things happen: pods get killed and they get new IP addresses. Therefore, on a schedule, the Pod Lifecycle Auditor runs and checks all the whitelisted IP addresses in the three security groups that enable access to the cluster. It then checks which IP addresses should not be there and reconciles the security groups by asking the Security Group Manager to remove those IP addresses. 
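At its core, the auditor's reconciliation is a set difference between the whitelisted IP addresses and the IP addresses of currently live pods. A minimal sketch (in Kotlin for illustration - the function and parameter names here are hypothetical, not Buffer's actual Lambda code):

```kotlin
// Hypothetical sketch of the Pod Lifecycle Auditor's core check: any
// whitelisted IP address without a matching live pod is considered stale
// and would be handed to the Security Group Manager for removal.
fun staleIps(whitelistedIps: Set<String>, livePodIps: Set<String>): Set<String> =
    whitelistedIps - livePodIps
```

So with `10.0.1.5` and `10.0.2.9` whitelisted but only a pod at `10.0.1.5` still alive, only `10.0.2.9` would be removed from the security groups.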
<br>Here is a diagram of how it all connects together</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://buffer.com/resources/content/images/2022/04/Buffer-s-Automation-for-OpenSearch---Page-2--2-.png" class="kg-image" alt="Secure Access To Opensearch on AWS" loading="lazy" width="2000" height="1287" srcset="https://buffer.com/resources/content/images/size/w600/2022/04/Buffer-s-Automation-for-OpenSearch---Page-2--2-.png 600w, https://buffer.com/resources/content/images/size/w1000/2022/04/Buffer-s-Automation-for-OpenSearch---Page-2--2-.png 1000w, https://buffer.com/resources/content/images/size/w1600/2022/04/Buffer-s-Automation-for-OpenSearch---Page-2--2-.png 1600w, https://buffer.com/resources/content/images/size/w2400/2022/04/Buffer-s-Automation-for-OpenSearch---Page-2--2-.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>Diagram for our solution to tackling Opensearch access problems through automated whitelisting, source: Peter Emil on behalf of Buffer&apos;s Infrastructure Team</figcaption></figure><h1 id="extra-gotchas">Extra Gotchas</h1><h3 id="why-did-we-create-three-security-groups-to-manage-access-to-the-opensearch-cluster">Why did we create three security groups to manage access to the OpenSearch cluster?</h3><p>Because security groups have a maximum limit of 50 ingress/egress rules. We anticipate that we won&#x2019;t have more than 70-90 pods at any given time needing access to the cluster. Having three security groups sets the limit at 150 rules which feels like a safe spot for us to start with.</p><h3 id="do-i-need-to-host-the-opensearch-cluster-in-the-same-vpc-as-the-eks-cluster">Do I need to host the Opensearch cluster in the same VPC as the EKS cluster?</h3><p>It depends on your networking setup! If your VPC has private subnets with NAT gateways, then you can host it in any VPC you like. 
If you don&#x2019;t have private subnets, you need to host both clusters in the same VPC because VPC CNI by default <a href="https://docs.aws.amazon.com/eks/latest/userguide/external-snat.html" rel="noreferrer nofollow noopener">NATs VPC-external pod traffic</a> to the hosting node&#x2019;s IP address which invalidates this solution. If you turn off the NAT configuration, then your pods can&#x2019;t reach the internet which is a bigger problem.</p><h3 id="if-a-pod-gets-stuck-in-crashloopbackoff-state-won%E2%80%99t-the-huge-volume-of-restarts-exhaust-the-150-rules-limit">If a pod gets stuck in CrashLoopBackoff state, won&#x2019;t the huge volume of restarts exhaust the 150 rules limit?</h3><p>No, because container crashes within a pod get restarted with the same IP address within the same pod. The IP Address isn&#x2019;t changed.</p><h3 id="aren%E2%80%99t-those-automations-a-single-point-of-failure">Aren&#x2019;t those automations a single-point-of-failure?</h3><p>Yes they are, which is why it&#x2019;s important to approach them with an SRE mindset. Adequate monitoring of these automations mixed with rolling deployments is crucial to having reliability here. Ever since these automations were instated, they&#x2019;ve been very stable and we didn&#x2019;t get any incidents. However, I sleep easy at night knowing that if one of them breaks for any reason I&#x2019;ll get notified way before it becomes a noticeable problem.</p><h1 id="conclusion">Conclusion</h1><p>I acknowledge that this solution isn&#x2019;t perfect but it was the quickest and easiest solution to implement without requiring continuous maintenance and without delving into the process of on-boarding a new cloud provider.</p><h2 id="over-to-you">Over to you</h2><p>What do you think of the approach we adopted here? Have you encountered similar situations in your organization? 
<a href="https://twitter.com/buffer" rel="noreferrer nofollow noopener">Send us a tweet!</a></p>]]></content:encoded></item><item><title><![CDATA[Load Fonts Fast]]></title><description><![CDATA[Learn the trick to fast fonts. This post shares how to load fonts fast.]]></description><link>https://buffer.com/resources/load-fonts-fast/</link><guid isPermaLink="false">61b226fd3370c9003b0cbb24</guid><category><![CDATA[Overflow]]></category><dc:creator><![CDATA[Juliana Gomez]]></dc:creator><pubDate>Thu, 09 Dec 2021 19:46:22 GMT</pubDate><media:content url="https://buffer.com/resources/content/images/2021/12/Frame-51.png" medium="image"/><content:encoded><![CDATA[<img src="https://buffer.com/resources/content/images/2021/12/Frame-51.png" alt="Load Fonts Fast"><p>At Buffer, we&#x2019;re constantly experimenting with ways we can improve our products and try out new ideas. We recently launched <a href="https://buffer.com/start-page">Start Page</a>, a beautiful, flexible, mobile-friendly landing page that you can build in minutes and update in seconds. As a Software Engineer on Buffer&#x2019;s team, I&#x2019;ve tackled a long list of fun projects, including Start Page. One thing I love about this project is that as we foray deeper and deeper into user-generated content and customization, we&#x2019;re discovering new engineering challenges that we haven&#x2019;t had in our frontends before. In this case, we wanted to introduce 13 new font options (for a total of 16 fonts) and we wanted to make sure that they loaded nicely and quickly. 
As I worked on this, I learned so much I didn&#x2019;t know about fonts, so in this post I want to share more about how we went about this for anyone facing similar challenges.</p><figure class="kg-card kg-embed-card kg-card-hascaption"><iframe src="https://share.buffer.com/RBuE2Q7B?embed=true" width="1679" height="986" style="border:none" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowtransparency="true" allowfullscreen="true"></iframe><figcaption>A screen capture of the Start Page app, demonstrating the new font picker functionality</figcaption></figure><h2 id="fonts-are-render-blocking">Fonts are render-blocking</h2><p>Let&#x2019;s start with the &#x2018;why&#x2019;. Fonts are generally pretty light resources, which are usually cached in the browser, so why is it important to ensure a quick loading strategy? Because fonts are high-priority, synchronous requests, which means they&#x2019;re render-blocking. If we can load fonts quickly and/or asynchronously, we can improve site speed.</p><h2 id="fout-and-foit">FOUT and FOIT</h2><p>Ok, so you don&#x2019;t want to block your rendering. There are generally two strategies to choose from to handle text rendered before its custom font has loaded:</p><p><strong>FOUT - Flash Of Unstyled Text</strong><br>Renders the text with a fallback font. Google Fonts can now return with display=swap, which instructs the browser to use the fallback font to display the text until the custom font has fully downloaded. If you want to be meticulous, you can find a better fallback font using this app: <a href="https://meowni.ca/font-style-matcher/" rel="noopener noreferrer">Font Style Matcher</a></p><p><strong>FOIT - Flash Of Invisible Text</strong><br>Here, the text is rendered with an invisible font until the custom font has fully downloaded. 
This one makes more sense to use for something like a logo where the brand would be affected if rendered with a fallback font (although for a logo I&#x2019;d use an SVG but examples!)</p><h2 id="the-trick-for-fast-fonts">THE trick for fast fonts</h2><p>The general advice nowadays is to preconnect to the font server:</p><pre><code>&lt;link rel=&quot;preconnect&quot; href=&quot;https://fonts.gstatic.com/&quot; crossorigin /&gt;
&lt;link rel=&quot;preconnect&quot; href=&quot;https://fonts.googleapis.com&quot; /&gt;</code></pre><p>then preload the fonts:</p><pre><code>&#xA0; &lt;link
&#xA0; &#xA0; &#xA0; rel=&quot;preload&quot;
&#xA0; &#xA0; &#xA0; as=&quot;style&quot;
&#xA0; &#xA0; &#xA0; href=&quot;https://fonts.googleapis.com/css2?family={your font families here}&amp;display=swap&quot;
&#xA0; &#xA0; /&gt;</code></pre><p>Finally, as a fallback, request the fonts async by setting media to &#x201C;print&#x201D; for browsers which don&#x2019;t support <code>rel=&quot;preload&quot;</code> (about 12% of browsers in the year 2021)</p><pre><code>&lt;link
&#xA0; &#xA0; &#xA0; rel=&quot;stylesheet&quot;
&#xA0; &#xA0; &#xA0; href=&quot;https://fonts.googleapis.com/css2?family={your font families here}&amp;display=swap&quot;
&#xA0; &#xA0; &#xA0; media=&quot;print&quot;
&#xA0; &#xA0; &#xA0; onload=&quot;this.media=&apos;all&apos;&quot;
&#xA0; &#xA0; /&gt;</code></pre><p>This works because a regular stylesheet is render-blocking but a print stylesheet is assigned idle priority. After it&#x2019;s loaded, the link&#x2019;s media is applied to all.</p><h3 id="hosting-your-own-fonts-is-the-fastest-but-google-fonts-does-a-lot-for-you">Hosting your own fonts is the fastest but Google Fonts does a lot for you:</h3><ul><li>Returns multiple alphabets</li><li>Returns a css file customized to the user agent that requested it</li><li>When you have multiple fonts, it&#x2019;s best to make 1 request so it&apos;s quicker</li><li>You can tailor your requests to target specific font-weights and formats (bold, italic, thin)</li></ul><h2 id="font-loading-api">Font Loading API</h2><p>There&#x2019;s a new-ish <a href="https://developer.mozilla.org/en-US/docs/Web/API/CSS_Font_Loading_API" rel="noopener noreferrer">CSS Font Loading API</a> that can request fonts on demand but I found that this doesn&#x2019;t play nice with Google Fonts because you need the source URL for the fonts and the Google Fonts URL that you get isn&#x2019;t the source, it&#x2019;s the request. Google, along with Typekit, does have a library called <a href="https://developers.google.com/fonts/docs/webfont_loader">Web Font Loader</a>, that works like the Font Loading API but plays better with Google Fonts.</p><h2 id="so-what-did-we-do-in-start-page">So what did we do in Start Page?</h2><p>We implemented the popular strategy for the builder (the app itself) and while we do have some FOUT on first load ever (remember browser caching!) it&#x2019;s very minimal, if seen at all. For generated pages, we get the fonts used in the theme before generating the HTML so we can inject only the fonts we need. This makes our generated pages much faster and lighter.We&#x2019;re excited to see how this experiment will play out and if folks are keen to get more font options. 
If that&#x2019;s the case, we might very well look into a more dynamic strategy (like loading only the currently used fonts on load and then sending another request if a user clicks on Appearance to change their fonts). Another option we could look into is implementing a way for requesting multiple fonts if we hosted them ourselves.<br><br>That&#x2019;s it for now! Thanks for making it this far, I hope this was interesting for you! Know anything neat about fonts that I didn&#x2019;t mention here? <a href="https://twitter.com/bufferdevs">Share it with us on Twitter.</a></p><p><em>Resources:</em><br><a href="https://csswizardry.com/2020/05/the-fastest-google-fonts/" rel="noopener noreferrer">The Fastest Google Fonts</a><br><a href="https://dev.to/masakudamatsu/loading-google-fonts-and-any-other-web-fonts-as-fast-as-possible-in-early-2021-4f5o" rel="noopener noreferrer">Loading Google Fonts and any other web fonts as fast as possible in early 2021</a><br><a href="https://rockcontent.com/blog/foit-vs-fout-comparison-webfont-loading/" rel="noopener noreferrer">FOIT vs FOUT: a comparison on web font loading</a><br><a href="https://css-tricks.com/almanac/properties/f/font-display/" rel="noopener noreferrer">CSS Tricks - font-display</a></p>]]></content:encoded></item><item><title><![CDATA[Migrating our component library to the Material Button]]></title><description><![CDATA[How we converted our Button styling to a Material Button component. 
This post is an overview on migrating our component library.]]></description><link>https://buffer.com/resources/migrating-our-component-library-to-the-material-button/</link><guid isPermaLink="false">5e991eb04280f300389c6b69</guid><category><![CDATA[Overflow]]></category><dc:creator><![CDATA[Joe Birch]]></dc:creator><pubDate>Fri, 14 Feb 2020 15:52:40 GMT</pubDate><media:content url="https://buffer.com/resources/content/images/wp-content/uploads/2020/02/pearse-o-halloran-mrbDuwF9gqk-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: html--><img src="https://buffer.com/resources/content/images/wp-content/uploads/2020/02/pearse-o-halloran-mrbDuwF9gqk-unsplash.jpg" alt="Migrating our component library to the Material Button"><p style="text-align:center">Header Photo by&#xA0;<a href="https://unsplash.com/@pearseoh?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Pearse O&#x2019;Halloran</a>&#xA0;on&#xA0;<a href="https://unsplash.com/s/photos/button?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p><!--kg-card-end: html--><p>For our Android clients we have a <a href="https://github.com/bufferapp/android-components">small component library</a> which is used to share common visual elements across the different Android applications that we work on. We recently updated our applications to the Material Components library, meaning that our component library itself needed to go through the same transition.</p><p>Within this component library we have a custom button component &#x2013; the button is styled to suit the design system at Buffer and having this as a custom component allows us to easily toggle between the different custom attributes that it provides. 
The button comes in three different states:</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2020/02/buttons-1024x699.png" class="kg-image" alt="Migrating our component library to the Material Button" loading="lazy" width="1024" height="699" srcset="https://buffer.com/resources/content/images/size/w600/wp-content/uploads/2020/02/buttons-1024x699.png 600w, https://buffer.com/resources/content/images/size/w1000/wp-content/uploads/2020/02/buttons-1024x699.png 1000w, https://buffer.com/resources/content/images/wp-content/uploads/2020/02/buttons-1024x699.png 1024w" sizes="(min-width: 720px) 720px"></figure><p>In order to achieve these states there was actually quite a bit of code &#x2013; each button has its own selector state for both the text color and background, meaning that we ended up with something like this in our resources:</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2020/02/Screenshot-2020-02-13-at-05.46.48.png" class="kg-image" alt="Migrating our component library to the Material Button" loading="lazy" width="769" height="722" srcset="https://buffer.com/resources/content/images/size/w600/wp-content/uploads/2020/02/Screenshot-2020-02-13-at-05.46.48.png 600w, https://buffer.com/resources/content/images/wp-content/uploads/2020/02/Screenshot-2020-02-13-at-05.46.48.png 769w" sizes="(min-width: 720px) 720px"></figure><p>When it comes to these buttons and their corresponding files, each text selector defines the text color states for each button type:</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2020/02/Screenshot-2020-02-14-at-15.47.27-1024x176.png" class="kg-image" alt="Migrating our component library to the Material Button" loading="lazy" width="1024" height="176" 
srcset="https://buffer.com/resources/content/images/size/w600/wp-content/uploads/2020/02/Screenshot-2020-02-14-at-15.47.27-1024x176.png 600w, https://buffer.com/resources/content/images/size/w1000/wp-content/uploads/2020/02/Screenshot-2020-02-14-at-15.47.27-1024x176.png 1000w, https://buffer.com/resources/content/images/wp-content/uploads/2020/02/Screenshot-2020-02-14-at-15.47.27-1024x176.png 1024w" sizes="(min-width: 720px) 720px"></figure><p>We then have a similar selector but for the <a href="https://github.com/bufferapp/android-components/blob/aa27cf48c3273c925e8b5efff0711120ab9f9591/app/src/main/res/drawable/button_bfr_round_light.xml">background of each button type</a>. Finally, each of these button states then has a shape drawable to create the rounded corner background for the button.</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2020/02/Screenshot-2020-02-14-at-15.49.03.png" class="kg-image" alt="Migrating our component library to the Material Button" loading="lazy" width="955" height="235" srcset="https://buffer.com/resources/content/images/size/w600/wp-content/uploads/2020/02/Screenshot-2020-02-14-at-15.49.03.png 600w, https://buffer.com/resources/content/images/wp-content/uploads/2020/02/Screenshot-2020-02-14-at-15.49.03.png 955w" sizes="(min-width: 720px) 720px"></figure><p>Whilst this works&#x2026; <strong>a)</strong> there&#x2019;s a lot of code here for something that in appearance doesn&#x2019;t look so complex and <strong>b)</strong> this isn&#x2019;t quite going to work when we migrate to use the MaterialButton component.</p><hr><p>When it comes to the MaterialButton component, the way we style the button can become a little different. 
Where we previously had all of the different background selectors that defined not only the background color but also the shape, we can now achieve this in a centralised style.</p><p>This means we can start by changing our background selector so that simple color references are used. If we want to do this in XML, then we can do it <a href="https://github.com/bufferapp/android-components/blob/master/app/src/main/res/color/selector_light_button_background.xml">the following way</a>:</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2020/02/Screenshot-2020-02-14-at-15.51.03.png" class="kg-image" alt="Migrating our component library to the Material Button" loading="lazy" width="1012" height="554" srcset="https://buffer.com/resources/content/images/size/w600/wp-content/uploads/2020/02/Screenshot-2020-02-14-at-15.51.03.png 600w, https://buffer.com/resources/content/images/size/w1000/wp-content/uploads/2020/02/Screenshot-2020-02-14-at-15.51.03.png 1000w, https://buffer.com/resources/content/images/wp-content/uploads/2020/02/Screenshot-2020-02-14-at-15.51.03.png 1012w" sizes="(min-width: 720px) 720px"></figure><p>This selector works the same way as the previous one; however, now we&#x2019;re not dealing with the different shapes within our XML file &#x2013; meaning that we end up with far fewer files to represent how our button looks:</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2020/02/Screenshot-2020-02-13-at-05.49.29.png" class="kg-image" alt="Migrating our component library to the Material Button" loading="lazy" width="719" height="308" srcset="https://buffer.com/resources/content/images/size/w600/wp-content/uploads/2020/02/Screenshot-2020-02-13-at-05.49.29.png 600w, https://buffer.com/resources/content/images/wp-content/uploads/2020/02/Screenshot-2020-02-13-at-05.49.29.png 719w"></figure><p>For each of these color / 
background selectors we can then set them within our custom MaterialButton component. We set these depending on the <a href="https://github.com/bufferapp/android-components/blob/master/app/src/main/java/org/buffer/android/components/RoundedButton.kt#L22">custom attribute passed</a>:</p><!--kg-card-begin: html--><pre><code>setTextColor(ContextCompat.getColorStateList(context,
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;R.color.selector_light_button_text))
backgroundTintList = ContextCompat.getColorStateList(context,
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;R.color.selector_light_button_background)</code></pre><!--kg-card-end: html--><p>So that&#x2019;s the text color and background color sorted, but what about the styling of the button? We previously had all of those different background selectors so that we could modify the shape; however, the styling for the MaterialButton allows us to achieve this directly through the <strong>Widget.MaterialComponents.Button</strong> style.</p><p>To make use of this we&#x2019;re going to <a href="https://github.com/bufferapp/android-components/blob/master/app/src/main/res/values/styles.xml">create our own style</a>, <strong>RoundedButtonStyle</strong>, and define some properties for the <strong>shapeAppearance</strong> attribute:</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2020/02/Screenshot-2020-02-14-at-15.51.55-1024x451.png" class="kg-image" alt="Migrating our component library to the Material Button" loading="lazy" width="1024" height="451" srcset="https://buffer.com/resources/content/images/size/w600/wp-content/uploads/2020/02/Screenshot-2020-02-14-at-15.51.55-1024x451.png 600w, https://buffer.com/resources/content/images/size/w1000/wp-content/uploads/2020/02/Screenshot-2020-02-14-at-15.51.55-1024x451.png 1000w, https://buffer.com/resources/content/images/wp-content/uploads/2020/02/Screenshot-2020-02-14-at-15.51.55-1024x451.png 1024w" sizes="(min-width: 720px) 720px"></figure><p>You&#x2019;ll notice here that for each corner we are defining a cornerFamily attribute; this defines how the corner of the button is rendered &#x2013; currently this can be either <strong>cut</strong> or <strong>rounded</strong>. We won&#x2019;t get too much into this here, but rounded gives us the same rounded corner effect that we previously had in place &#x2013; just with much less code.
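</p><p>As a rough sketch of what the style in the screenshot above looks like in XML &#x2013; the style names and the corner size here are illustrative, not our exact shipped values:</p><!--kg-card-begin: html--><pre><code>&lt;style name=&quot;RoundedButtonStyle&quot; parent=&quot;Widget.MaterialComponents.Button&quot;&gt;
&#xA0;&#xA0;&#xA0;&#xA0;&lt;item name=&quot;shapeAppearance&quot;&gt;@style/RoundedShapeAppearance&lt;/item&gt;
&lt;/style&gt;

&lt;style name=&quot;RoundedShapeAppearance&quot;&gt;
&#xA0;&#xA0;&#xA0;&#xA0;&lt;item name=&quot;cornerFamilyTopLeft&quot;&gt;rounded&lt;/item&gt;
&#xA0;&#xA0;&#xA0;&#xA0;&lt;item name=&quot;cornerFamilyTopRight&quot;&gt;rounded&lt;/item&gt;
&#xA0;&#xA0;&#xA0;&#xA0;&lt;item name=&quot;cornerFamilyBottomLeft&quot;&gt;rounded&lt;/item&gt;
&#xA0;&#xA0;&#xA0;&#xA0;&lt;item name=&quot;cornerFamilyBottomRight&quot;&gt;rounded&lt;/item&gt;
&#xA0;&#xA0;&#xA0;&#xA0;&lt;item name=&quot;cornerSize&quot;&gt;16dp&lt;/item&gt;
&lt;/style&gt;</code></pre><!--kg-card-end: html--><p>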
We then use the <strong>cornerSize</strong> attribute to define the radius used for the rounded corner.</p><p>Once we set the style for our button, it has the intended look and feel, now consistent with the rest of our application through the use of the material component library.</p><hr><p>In this post we&#x2019;ve taken a quick look at how we can take our existing Button styling and convert it to a Material Button component. Now with this in place, we have greater consistency throughout our app when it comes to buttons! Have you worked with something similar recently or looking to make the jump to material components also? Feel free to reach out in the comments below if so!</p>]]></content:encoded></item><item><title><![CDATA[Selectively running Android modularized unit tests on your CI server]]></title><description><![CDATA[Modularizing projects can bring different advantages to your team. This post is an overview of running modularized unit tests on a CI server.]]></description><link>https://buffer.com/resources/selectively-running-android-modularized-unit-tests-on-your-ci-server/</link><guid isPermaLink="false">5e991eb04280f300389c6b6a</guid><category><![CDATA[Overflow]]></category><dc:creator><![CDATA[Joe Birch]]></dc:creator><pubDate>Fri, 20 Dec 2019 14:10:22 GMT</pubDate><media:content url="https://buffer.com/resources/content/images/wp-content/uploads/2019/12/icons8-team-dhZtNlvNE8M-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/12/icons8-team-dhZtNlvNE8M-unsplash.jpg" alt="Selectively running Android modularized unit tests on your CI server"><p>Header Photo by <a href="https://unsplash.com/@icons8?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Icons8 Team</a> on <a href="https://unsplash.com/s/photos/time?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p><hr><p>Modularizing your Android
projects can bring a number of different advantages to your team. Some of these include reduced build times, a greater separation of concerns and the ability to reuse components throughout our applications. As we started to get more and more modules in our projects, I started to think more about how these were being run on our CI server. For example, we open a pull request and for that changed code <strong>all</strong> of our tests and checks are run for the entire project. When you have a couple of modules you probably won&#x2019;t see a concern here. But what if we have 30 modules, each with plenty of code / tests, and we open a pull request that only makes changes to one of those modules? In this article I want to share how we&#x2019;ve made some additions to our CI to help here!</p><hr><p>We&#x2019;ve been trying to reduce our CI times recently, so with our modularisation this seemed like a good place to start looking. We have unit tests in every feature module in our application and we have around 20 modules currently. Whilst unit tests don&#x2019;t take <strong>too</strong> long to run, being able to shave some time off of each build that occurs will add up over the days, weeks and months that our CI is building our tasks. With restricted concurrent builds on our current CI plan, that saved time helps to free up our CI server quicker, keeping us more productive in our work.</p><p>Unfortunately, there&#x2019;s no magic way to detect what modules have changes to them and only run the tests for those modules. In Android we can either run a gradle <strong>test</strong> task from the root of our project, or individually for each of the modules in our project. Even on our CI server (bitrise) the test step takes a single test command, which by default uses the <strong>test</strong> task from the root of the project. 
When it comes to running unit tests via gradle, we can however provide a list of test commands to run during our test task, for example:</p><!--kg-card-begin: html--><pre><code>./gradlew :moduleA:testDebugUnitTest :moduleB:testDebugUnitTest</code></pre><!--kg-card-end: html--><p>That would solve all of our problems when it comes to running our unit tests, but how do we get there? There are a couple of things that we need to do in order to build our test commands dynamically.</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/12/flow-1024x209.png" class="kg-image" alt="Selectively running Android modularized unit tests on your CI server" loading="lazy" width="1024" height="209" srcset="https://buffer.com/resources/content/images/size/w600/wp-content/uploads/2019/12/flow-1024x209.png 600w, https://buffer.com/resources/content/images/size/w1000/wp-content/uploads/2019/12/flow-1024x209.png 1000w, https://buffer.com/resources/content/images/wp-content/uploads/2019/12/flow-1024x209.png 1024w" sizes="(min-width: 720px) 720px"></figure><hr><p>We need to begin by detecting the modules in our code that have changed files in them. Again, there&#x2019;s no straightforward way to detect this from within the CI server &#x2013; so we&#x2019;re going to need to perform some git diffing and calculate the changed modules using those diffs. This is going to look something like so:</p><!--kg-card-begin: html--><pre><code>dest=origin/_branch_merging_into_
branch=origin/_branch_for_pull_request_

changed_modules=&quot;&quot;

git diff --name-only $dest..$branch | { while read line
&#xA0;&#xA0;&#xA0;&#xA0;do
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;module_name=${line%%/*}

&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;if [[ ${module_name} != &quot;buildSrc&quot; &amp;&amp; 
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;${changed_modules} != *&quot;$module_name&quot;* ]]; then 
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;changed_modules=&quot;${changed_modules} ${module_name}&quot;
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;fi
&#xA0;&#xA0;&#xA0;&#xA0;done
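&#xA0;&#xA0;&#xA0;&#xA0;# note: because of the pipe, this { ... } block runs in a subshell &#x2013;
&#xA0;&#xA0;&#xA0;&#xA0;# anything we want to keep (like changed_modules) has to be written out
&#xA0;&#xA0;&#xA0;&#xA0;# (e.g. via envman) before the closing brace, or it is lost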
}</code></pre><!--kg-card-end: html--><p>We need to start by retrieving the destination that our branch is being merged into, along with the actual branch for the pull request that has been opened. You can&#x2019;t hardcode these, as every time you open a pull request this code is going to be run. On bitrise you can access environment variables to get these values:</p><!--kg-card-begin: html--><pre><code>dest=origin/$BITRISEIO_GIT_BRANCH_DEST
branch=origin/$BITRISE_GIT_BRANCH</code></pre><!--kg-card-end: html--><p>Next we&#x2019;re going to perform the git diff operation against these two branches. From the code above, the section below takes our two branches and loops through each line that is presented in the diff. However, we don&#x2019;t care about the actual diff content; we only want the names of the files that have changes. Using the <strong>--name-only</strong> flag when performing the diff means that we will be presented only with the file names, instead of the file diff content.</p><!--kg-card-begin: html--><pre><code>git diff --name-only $dest..$branch | { while read line
&#xA0;&#xA0;&#xA0;&#xA0;do

&#xA0;&#xA0;&#xA0;&#xA0;done
}</code></pre><!--kg-card-end: html--><p>Now that we have our changed files, we need to pull out the module name from each of them. In the line below, we pull out the string content up until the first forward slash character.</p><!--kg-card-begin: html--><pre><code>module_name=${line%%/*} # e.g. moduleA/src/main/Main.kt -&gt; moduleA</code></pre><!--kg-card-end: html--><p>To note, this isn&#x2019;t a surefire way of getting the module name as it can yield unexpected values. For example, if we change a gradle or text file in the root of our project which isn&#x2019;t within a module, then this module_name variable could be assigned something that doesn&#x2019;t represent a module. The same goes for modules that we have deleted &#x2013; whilst these would appear in the diff, we wouldn&#x2019;t want to run the tests for them as the module no longer exists. The next script that we write will handle this; for now we just want to get a list of everything that has changed. This way, we can also re-use this script if we decide to selectively run other things on our CI server.</p><p>The last piece of code in our script will be used to build our list of changed module names. So for each <strong>module_name</strong> variable we&#x2019;re going to add it to our <strong>changed_modules</strong> variable; in the end this will result in a single string of separated module names.</p><p>You may notice that this is all wrapped in an if statement &#x2013; this checks whether the changed_modules variable already contains the current module_name and if so, we don&#x2019;t want to re-add it to our <strong>changed_modules</strong> variable (otherwise we will end up with duplicates!).</p><!--kg-card-begin: html--><pre><code>if [[ ${changed_modules} != *&quot;$module_name&quot;* ]]; then
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;changed_modules=&quot;${changed_modules} ${module_name}&quot;

fi</code></pre><!--kg-card-end: html--><p>At this point we have a list of module names in a single string. Depending on your CI service, you may need to pass this to another script step. In the case of bitrise, you can write this value to an environment variable to re-use in other script steps when building module-specific commands:</p><!--kg-card-begin: html--><pre><code>envman add --key CHANGED_MODULES --value &quot;${changed_modules}&quot;</code></pre><!--kg-card-end: html--><hr><p>From the above operations we&#x2019;re now going to have a string that represents all of the changed module names in our application. This might look something like:</p><!--kg-card-begin: html--><pre><code>moduleA moduleB moduleC</code></pre><!--kg-card-end: html--><p>Now that we have a collection of these module names, we need to go ahead and build the test task using those names so that we can run the unit tests for those modules. For this we&#x2019;re going to need to take each one of the module names from our string and create the test command for each one. For running our unit tests we&#x2019;re going to want to end up with a string that looks like:</p><!--kg-card-begin: html--><pre><code>:moduleA:testDebugUnitTest :moduleB:testDebugUnitTest ...</code></pre><!--kg-card-end: html--><p>However, as previously mentioned, it might be the case that some module names that we&#x2019;ve acquired aren&#x2019;t actually modules within our application. For example, if you&#x2019;ve edited the build.gradle / gradle.properties files, your buildSrc module or even deleted moduleA from your application, then these will all be contained within your module name string. For this reason, before we build our test command string we need to filter out anything that doesn&#x2019;t correspond to a module we can run unit tests against. The code to achieve this looks like so:</p><!--kg-card-begin: html--><pre><code>AVAILABLE_TASKS=$(./gradlew tasks --all)
modules=$CHANGED_MODULES

test_commands=&quot;&quot;

for module in $modules
do 
&#xA0;&#xA0;&#xA0;&#xA0;if [[ $AVAILABLE_TASKS =~ $module&quot;:&quot; ]]; then 
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;test_commands=&quot;${test_commands} :${module}:testDebugUnitTest&quot;
&#xA0;&#xA0;&#xA0;&#xA0;fi
done

if [[ $test_commands == &quot;&quot; ]]; then
&#xA0;&#xA0;&#xA0;&#xA0;test_commands=&quot;test&quot;
fi

envman add --key UNIT_TEST_COMMANDS --value &quot;${test_commands}&quot;</code></pre><!--kg-card-end: html--><p>We begin by retrieving all of the available gradle tasks within our application:</p><!--kg-card-begin: html--><pre><code>AVAILABLE_TASKS=$(./gradlew tasks --all)</code></pre><!--kg-card-end: html--><p>Whilst this doesn&#x2019;t provide us with the actual commands for running tests, it does tell us the modules that can have commands run against them. For example:</p><ul><li>If we make changes to the buildSrc module, this has no gradle tasks to run against it that will come back from our tasks command</li></ul><ul><li>A deleted module would not show any gradle commands that can be run for it</li></ul><ul><li>If root files have been edited (root build.gradle, gradle.properties) that are not in a module, these names will not match any modules and their commands</li></ul><p>With the <strong>tasks &#x2013;all</strong> command we can retrieve a collection of modules and check our module names against them. With this collection of commands we can now take our generated module names from the last script and check these agains the available commands. We&#x2019;ll begin by retrieving this module names from the environment variable that we saved them to:</p><!--kg-card-begin: html--><pre><code>modules=$CHANGED_MODULES</code></pre><!--kg-card-end: html--><p>Next we need to check whether our available tasks contains a reference to the modules within our module names string. For this we loop through each of the module names in our string and verify that the module name is supported for our needs:</p><!--kg-card-begin: html--><pre><code>for module in $modules
do 
&#xA0;&#xA0;&#xA0;&#xA0;if [[ $AVAILABLE_TASKS =~ $module&quot;:&quot; ]]; then 
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;test_commands=&quot;${test_commands} :${module}:testDebugUnitTest&quot;
&#xA0;&#xA0;&#xA0;&#xA0;fi
&#xA0;&#xA0;&#xA0;&#xA0;done</code></pre><!--kg-card-end: html--><p>If you run the <strong>tasks --all</strong> command then you&#x2019;ll see something like the following:</p><!--kg-card-begin: html--><pre><code>moduleA:someTask moduleA:anotherTask moduleB:someTask ...</code></pre><!--kg-card-end: html--><p>And in our script above we have the following line:</p><!--kg-card-begin: html--><pre><code>if [[ $AVAILABLE_TASKS =~ $module&quot;:&quot; ]];</code></pre><!--kg-card-end: html--><p>Here we are taking our module name, appending a colon to it and checking whether our variable containing the tasks has a reference to this string value. If it does, we can presume that the module supports the unit test command that we want to run, and we append our <strong>test_commands</strong> variable with the command to run the unit tests for that module.</p><!--kg-card-begin: html--><pre><code>test_commands=&quot;${test_commands} :${module}:testDebugUnitTest&quot;</code></pre><!--kg-card-end: html--><p>If for some reason our test_commands variable is empty, either something has gone wrong or no modules have been changed &#x2013; maybe only the build.gradle file has been changed in the current PR. Here you can either not run any tests, or have a safeguard in place that will just run all of the tests for the project. This can be done by assigning the &#x201C;test&#x201D; string value to our test_commands variable.</p><!--kg-card-begin: html--><pre><code>if [[ $test_commands == &quot;&quot; ]]; then
&#xA0;&#xA0;&#xA0;&#xA0;test_commands=&quot;test&quot;
fi</code></pre><!--kg-card-end: html--><p>With the above done we should now have a string variable that looks something like so:</p><!--kg-card-begin: html--><pre><code>:moduleA:testDebugUnitTest :moduleB:testDebugUnitTest</code></pre><!--kg-card-end: html--><p>This is great! Now we have a collection of the commands that need to be run for our unit test task. The only thing left to do is to save this to an environment variable so that our CI can use it within the unit test task.</p><!--kg-card-begin: html--><pre><code>envman add --key UNIT_TEST_COMMANDS --value &quot;${test_commands}&quot;</code></pre><!--kg-card-end: html--><hr><p>This next part will really depend on the service you are using for your CI. For us that&#x2019;s Bitrise, which provides a Gradle Unit Test step used to run the unit tests in a project. This step takes a Test task input variable &#x2013; this is where we are now going to pass a reference to our <strong>UNIT_TEST_COMMANDS</strong> variable.</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/12/Screenshot-2019-12-20-at-12.02.02.png" class="kg-image" alt="Selectively running Android modularized unit tests on your CI server" loading="lazy" width="494" height="194"></figure><p>Now when our unit tests are run, only the unit tests for the changed modules are run. This will help us to shave some time off our builds, allowing us to be more productive and efficient when building our products!</p><hr><p>With all of the above you are able to put something in place that allows you to selectively run unit tests for your modularised Android project. Even if the above isn&#x2019;t exactly what you&#x2019;re looking to put in place, you may be able to use some of the module-specific scripting for something else within your CI server.</p><p>Are you already using scripting for these kinds of things, or looking to put something in place?
Feel free to reach out and I&#x2019;ll be happy to chat over any of these things!</p>]]></content:encoded></item><item><title><![CDATA[Tainting and Labeling Kubernetes Nodes to Run Special Workload — A quick guide that is finally NOT confusing]]></title><description><![CDATA[A quick guide to tainting and labeling kubernetes. This post is an overview of tainting and labeling kubernetes nodes to run special workload.]]></description><link>https://buffer.com/resources/tainting-and-labeling-kubernetes-nodes-to-run-special-workload-e2-80-8a--e2-80-8aa-quick-guide-that-is-finally-not-confusing/</link><guid isPermaLink="false">5e991eb04280f300389c6b6b</guid><category><![CDATA[Overflow]]></category><dc:creator><![CDATA[Steven Cheng]]></dc:creator><pubDate>Tue, 03 Dec 2019 23:11:37 GMT</pubDate><media:content url="https://buffer.com/resources/content/images/wp-content/uploads/2019/12/440px-Kubernetes_logo_without_workmark.svg_.png" medium="image"/><content:encoded><![CDATA[<img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/12/440px-Kubernetes_logo_without_workmark.svg_.png" alt="Tainting and Labeling Kubernetes Nodes to Run Special Workload&#x200A;&#x2014;&#x200A;A quick guide that is finally NOT confusing"><p>All right folks, I intend to keep this one short and that&#x2019;s what I will do. I mean, it&#x2019;s supposed to be easy but the official documentation(<a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="noreferrer noopener">1</a>, <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node" rel="noreferrer noopener">2</a>) makes it unnecessarily confusing. 
So I think maybe I can help to fill in the gap.</p><p>I will be using one of our business requirements at Buffer <a href="https://itnext.io/how-to-set-kubernetes-resource-requests-and-limits-a-saga-to-improve-cluster-stability-and-a7b1800ecff1">in this</a> <a href="https://buffer.com/resources/how-to-set-kubernetes-resource-requests-and-limits-e2-80-8a--e2-80-8aa-saga-to-improve-cluster-stability-and-efficiency/">project</a>, as an example for this blog post.</p><h3 id="quick-recap">Quick recap</h3><p>So, we need a few nodes that are dedicated to running cronjobs, and nothing else. At the same time, we want to make sure the cronjobs are scheduled to these nodes, and nowhere else. This means we need 2 things:</p><ul><li>Tainted nodes that don&#x2019;t take other workloads</li><li>The workload that only goes to the destination nodes</li></ul><p>Now, let&#x2019;s start with the nodes, then the workload.</p><h3 id="nodes">Nodes</h3><p>Since the requirement is broken down into 2 aspects (see above), there are 2 things we will need to specify for nodes. As always, kops is my weapon of choice.</p><p>In kops, you can do this with <code>kops edit ig &lt;INSTANCE GROUP IN INTEREST&gt;</code>:</p><!--kg-card-begin: html--><pre><code>apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
&#xA0;&#xA0;labels:
&#xA0;&#xA0;&#xA0;&#xA0;kops.k8s.io/cluster: steven.k8s.com
&#xA0;&#xA0;name: frequent-cronjob-nodes
spec:
&#xA0;&#xA0;image: kope.io/k8s-1.13-debian-stretch
&#xA0;&#xA0;machineType: m4.xlarge
&#xA0;&#xA0;maxSize: 2
&#xA0;&#xA0;minSize: 2
&#xA0;&#xA0;nodeLabels:
&#xA0;&#xA0;&#xA0;&#xA0;kops.k8s.io/instancegroup: frequent-cronjob-nodes
&#xA0;&#xA0;role: Node
&#xA0;&#xA0;subnets:
&#xA0;&#xA0;- us-east-1b
&#xA0;&#xA0;- us-east-1c
&#xA0;&#xA0;taints:
&#xA0;&#xA0;- dedicated=frequent-cronjob-nodes:NoSchedule</code></pre><!--kg-card-end: html--><h3 id="tainting-nodes">Tainting nodes</h3><p>This prevents other workloads from being scheduled to them. It&#x2019;s achieved by these 2 lines:</p><!--kg-card-begin: html--><pre><code>taints: 
- dedicated=frequent-cronjob-nodes:NoSchedule</code></pre><!--kg-card-end: html--><h3 id="labeling-nodes">Labeling nodes</h3><p>This helps a specialized workload to locate the nodes. It&#x2019;s achieved by these 2 lines:</p><!--kg-card-begin: html--><pre><code>nodeLabels:
&#xA0;&#xA0;kops.k8s.io/instancegroup: frequent-cronjob-nodes</code></pre><!--kg-card-end: html--><p>I know there are people who don&#x2019;t use kops out there. If you are one of them, here are 2 commands to help:</p><p><code>kubectl taint nodes &lt;NODE IN INTEREST&gt; dedicated=frequent-cronjob-nodes:NoSchedule</code></p><p><code>kubectl label nodes &lt;NODE IN INTEREST&gt; kops.k8s.io/instancegroup=frequent-cronjob-nodes</code></p><h3 id="workload">Workload</h3><p>Similar to nodes, we will need to do 2 things to the deployment/cronjob yaml file. I&#x2019;m including a complete yaml to save our eyes <a href="https://twitter.com/caged/status/1039937162769096704" rel="noreferrer noopener">from this</a> (yeah, you know what I&#x2019;m talking about).</p><!--kg-card-begin: html--><pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
&#xA0;&#xA0;namespace: dev
&#xA0;&#xA0;name: steven-cron
&#xA0;&#xA0;labels:
&#xA0;&#xA0;&#xA0;&#xA0;app: steven-cron
spec:
&#xA0;&#xA0;schedule: &quot;* * * * *&quot;
&#xA0;&#xA0;jobTemplate:
&#xA0;&#xA0;&#xA0;&#xA0;spec:
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;template:
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;spec:
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;nodeSelector:
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;kops.k8s.io/instancegroup: frequent-cronjob-nodes
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;tolerations:
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;- key: dedicated
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;value: frequent-cronjob-nodes
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;operator: &quot;Equal&quot;
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;effect: NoSchedule
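&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;# the key/value/effect above must mirror the taint applied to the nodes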
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;containers:
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;- name: steven-cron
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;image: buffer/steven-cron
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;command: [&quot;php&quot;, &quot;./src/Crons/index.php&quot;]
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;imagePullSecrets:
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;- name: buffer</code></pre><!--kg-card-end: html--><h3 id="tolerating-taints">Tolerating taints</h3><p>This makes sure the workload can be scheduled to the tainted nodes. It&#x2019;s achieved by these lines</p><!--kg-card-begin: html--><pre><code>tolerations: 
- key: dedicated
&#xA0;&#xA0;value: frequent-cronjob-nodes
&#xA0;&#xA0;operator: &quot;Equal&quot;
&#xA0;&#xA0;effect: NoSchedule</code></pre><!--kg-card-end: html--><h3 id="specifying-destination-nodes">Specifying destination nodes</h3><p>This makes sure the workload is only to be scheduled to the specified nodes. It&#x2019;s achieved by these 2 lines:</p><!--kg-card-begin: html--><pre><code>nodeSelector:
&#xA0;&#xA0;kops.k8s.io/instancegroup: frequent-cronjob-nodes</code></pre><!--kg-card-end: html--><h3 id="profit">Profit</h3><p>This is it. We can now rest assured the right workload will be going to the right nodes. In this way, we can start building some specialized node groups for specialized workloads, say GPU nodes for machine learning or memory-intensive nodes for local caching.</p><p>I hope this helps in any way. Until next time, please feel free to hit me up on <a href="https://twitter.com/stevenc81" rel="noreferrer noopener">Twitter</a> should you have any questions!</p>]]></content:encoded></item><item><title><![CDATA[How to Set Kubernetes Resource Requests and Limits - A Saga to Improve Cluster Stability and Efficiency]]></title><description><![CDATA[This post explains how to set Kubernetes resource requests and limits to improve cluster stability and efficiency.]]></description><link>https://buffer.com/resources/how-to-set-kubernetes-resource-requests-and-limits-e2-80-8a--e2-80-8aa-saga-to-improve-cluster-stability-and-efficiency/</link><guid isPermaLink="false">5e991eb04280f300389c6b6c</guid><category><![CDATA[Workplace of the future]]></category><category><![CDATA[Overflow]]></category><dc:creator><![CDATA[Steven Cheng]]></dc:creator><pubDate>Wed, 13 Nov 2019 19:18:46 GMT</pubDate><media:content url="https://buffer.com/resources/content/images/wp-content/uploads/2019/11/2560px-Kubernetes_logo_without_workmark.svg_.png" medium="image"/><content:encoded><![CDATA[<h3 id="a-mystery">A mystery</h3><img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/11/2560px-Kubernetes_logo_without_workmark.svg_.png" alt="How to Set Kubernetes Resource Requests and Limits&#x200A;-&#x200A;A Saga to Improve Cluster Stability and Efficiency"><p>So, it all started on September 1st, right after our cluster upgrade from 1.11 to 1.12. Almost immediately &#x2013; the very next day &#x2013; we began to see alerts on <code>kubelet</code> reported by Datadog.
On some days we would get a few (3&#x2013;5) of them; on other days we would get more than 10. The alert monitor is based on a Datadog check &#x2013; <code>kubernetes.kubelet.check</code>, and it&#x2019;s triggered whenever the <code>kubelet</code> process is down in a node.</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/11/bdca2c2a8545bf665d0933fdb7e67075_Image2019-11-06at12.48.47PM-945x1024.png" class="kg-image" alt="How to Set Kubernetes Resource Requests and Limits&#x200A;-&#x200A;A Saga to Improve Cluster Stability and Efficiency" loading="lazy" width="945" height="1024" srcset="https://buffer.com/resources/content/images/size/w600/wp-content/uploads/2019/11/bdca2c2a8545bf665d0933fdb7e67075_Image2019-11-06at12.48.47PM-945x1024.png 600w, https://buffer.com/resources/content/images/wp-content/uploads/2019/11/bdca2c2a8545bf665d0933fdb7e67075_Image2019-11-06at12.48.47PM-945x1024.png 945w" sizes="(min-width: 720px) 720px"></figure><p>We know <a href="https://kubernetes.io/docs/concepts/overview/components/#kubelet" rel="noreferrer noopener">kubelet</a> plays an important role in Kubernetes scheduling. Not having it running properly in a node would directly remove that node from a functional cluster. The more nodes with a problematic <code>kubelet</code>, the more the cluster degrades. Now, imagine waking up to 16 alerts in the morning.
It was absolutely terrifying.</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/11/Image-2019-11-04-at-4.25.37-PM-1024x581.png" class="kg-image" alt="How to Set Kubernetes Resource Requests and Limits&#x200A;-&#x200A;A Saga to Improve Cluster Stability and Efficiency" loading="lazy" width="1024" height="581" srcset="https://buffer.com/resources/content/images/size/w600/wp-content/uploads/2019/11/Image-2019-11-04-at-4.25.37-PM-1024x581.png 600w, https://buffer.com/resources/content/images/size/w1000/wp-content/uploads/2019/11/Image-2019-11-04-at-4.25.37-PM-1024x581.png 1000w, https://buffer.com/resources/content/images/wp-content/uploads/2019/11/Image-2019-11-04-at-4.25.37-PM-1024x581.png 1024w" sizes="(min-width: 720px) 720px"></figure><p>What really puzzled us was that all the services running on the problematic nodes seemed innocuous. In some cases, there were only a handful of running services, and some high CPU usage right before. It was extremely hard to point the finger at anything when the potential offender might have left the scene, thus leaving no trace for us to diagnose further. Funnily enough, there wasn&#x2019;t any obvious performance impact such as request latency across our services. This little fact added even more mystery to the whole thing.</p><p>This phenomenon kept starting around the same time every day (5:30AM PT), and usually stopped before noon, except for the weekends. At one point, I felt I could use these Datadog alerts as my alarm clock. Not super fun, and I certainly got some grey hair with this challenge.</p><h3 id="our-investigation">Our investigation</h3><p>From the start, we knew this was going to be a tough investigation that would require a systematic approach. For brevity, I&#x2019;m going to just list out some key experiments we attempted and spare you the details.
As much as they are good investigative steps, I don&#x2019;t believe they are important for this post. Here is what we tried:</p><ul><li>We upgraded the cluster from 1.12 to 1.13</li><li>We created some tainted nodes and moved all our cronjobs to them</li><li>We created more tainted nodes and moved most CPU consuming workers to them</li><li>We scaled up the cluster by almost 20%, from 42 nodes to 50 nodes</li><li>We scaled down the cluster again because we didn&#x2019;t see any improvements</li><li>We recycled (delete and recreate) all the nodes that had previously reported kubelet issues, only to see new nodes follow suit on the next day</li><li>Just between you and me, I even theorized the Datadog alert might be broken because there wasn&#x2019;t any obvious service performance impact. But I couldn&#x2019;t bring myself to close the case knowing the culprit might still be at large.</li></ul><p>With a stroke of luck and a lot of witch-hunting, this piqued my attention:</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/11/1-1024x395.png" class="kg-image" alt="How to Set Kubernetes Resource Requests and Limits&#x200A;-&#x200A;A Saga to Improve Cluster Stability and Efficiency" loading="lazy" width="1024" height="395" srcset="https://buffer.com/resources/content/images/size/w600/wp-content/uploads/2019/11/1-1024x395.png 600w, https://buffer.com/resources/content/images/size/w1000/wp-content/uploads/2019/11/1-1024x395.png 1000w, https://buffer.com/resources/content/images/wp-content/uploads/2019/11/1-1024x395.png 1024w" sizes="(min-width: 720px) 720px"></figure><p>We saw that 10 <code>buffer-publish</code> pods were scheduled to a single node for around 10 minutes, only to be terminated shortly after. At the same time the CPU usage spiked, <code>kubelet</code> cried out, and the pods disappeared from the node in the next few minutes after termination.</p><p>No wonder we could never find anything after alerts.
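</p><p>For anyone chasing something similar: the per-node view that surfaced this boils down to standard kubectl queries (the node name below is a made-up placeholder):</p><!--kg-card-begin: html--><pre><code># list every pod scheduled on a suspect node, across all namespaces
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=ip-10-0-1-23.ec2.internal

# current per-pod CPU/memory usage (requires metrics-server or heapster)
kubectl top pod --all-namespaces</code></pre><!--kg-card-end: html--><p>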
But what was so special about these pods, I wondered? The only fact we had was the high CPU usage. Now, let&#x2019;s take a look at their resource requests/limits:</p><!--kg-card-begin: html--><pre><code>resources:
&#xA0;&#xA0;limits:
&#xA0;&#xA0;&#xA0;&#xA0;cpu: 1000m
&#xA0;&#xA0;&#xA0;&#xA0;memory: 512Mi
&#xA0;&#xA0;requests:
&#xA0;&#xA0;&#xA0;&#xA0;cpu: 100m
&#xA0;&#xA0;&#xA0;&#xA0;memory: 50Mi</code></pre><!--kg-card-end: html--><blockquote><em>The CPU/memory </em><strong><em>requests</em></strong><em> parameter tells Kubernetes how much of a resource should be allocated initially</em></blockquote><blockquote><em>The CPU/memory </em><strong><em>limits</em></strong><em> parameter tells Kubernetes the maximum resource that should be given under any circumstances</em></blockquote><p>Here is a <a href="http://blog.kubecost.com/blog/requests-and-limits/" rel="noreferrer noopener">post</a> that does a much better job of explaining this concept. I highly recommend reading it in full. Kudos to the team at <a href="https://kubecost.com/" rel="noreferrer noopener">kubecost</a>!</p><p>Now, back to where we were. The CPU requests/limits ratio is 10, and that should be fine, right? We allocate 0.1 CPU to a pod in the beginning and limit its maximum usage to 1 CPU. This way, we have a conservative start while still keeping some kind of, albeit arbitrary, upper boundary. It almost feels like we are following best practice!</p><p>Then I thought, this doesn&#x2019;t make any sense at all. When 10 such pods are scheduled onto a single node, the total CPU these limits would allow for is 10 CPUs, but there aren&#x2019;t 10 CPUs in an <code>m4.xlarge</code> node &#x2013; it only has 4 vCPUs. What would happen during our peak hours, say 5:30AM PT when America wakes up? I could almost visualize the grim picture: these node-killing pods taking all the CPU, to the point that even <code>kubelet</code> starts to die off, and then the whole node just crashes and burns.</p><p>So, what can we do about it?</p><h3 id="the-remedy">The remedy</h3><p>Obviously the easiest way is to lower the CPU limits so these pods will kill themselves before they kill a node. But this doesn&#x2019;t feel quite right to me. 
What if they really need that much CPU for normal operations? In that case, throttling (<a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/" rel="noreferrer noopener">more on this</a>) would degrade their performance.</p><p>Okay, how about increasing the CPU requests so the pods are more spread out and don&#x2019;t all get scheduled onto a single node? That sounds like a better plan, and it&#x2019;s the one we implemented. Here are the details:</p><h4 id="figure-out-how-much-you-typically-need">Figure out how much you typically need</h4><p>I used the max reported value of the Datadog metric <code>kubernetes.cpu.usage.total</code> over the past week to give me a point of reference.</p><p>You can see that in general it stays below 200m (0.2 CPU). This tells me it&#x2019;s hard to go wrong with that value for the CPU requests.</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/11/2-1024x478.png" class="kg-image" alt="How to Set Kubernetes Resource Requests and Limits&#x200A;-&#x200A;A Saga to Improve Cluster Stability and Efficiency" loading="lazy" width="1024" height="478" srcset="https://buffer.com/resources/content/images/size/w600/wp-content/uploads/2019/11/2-1024x478.png 600w, https://buffer.com/resources/content/images/size/w1000/wp-content/uploads/2019/11/2-1024x478.png 1000w, https://buffer.com/resources/content/images/wp-content/uploads/2019/11/2-1024x478.png 1024w" sizes="(min-width: 720px) 720px"></figure><h4 id="put-a-limit-on-it">Put a limit on it</h4><p>Now, this was the tricky part, and like most tricky things in life, there isn&#x2019;t a simple solution. In my experience, a good start is 2x the requests. In this case, that would be 400m (0.4 CPU). After the change, I spent some time eyeballing the service performance metrics to make sure performance wasn&#x2019;t impacted by CPU throttling. 
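</p><p>Putting the two steps together, the adjusted resource block for a worker like this might look as follows. The 200m/400m figures are illustrative, derived from the observed usage described above rather than a one-size-fits-all recommendation:</p>

```yaml
resources:
  requests:
    cpu: 200m      # roughly the max usage observed over the past week
    memory: 50Mi
  limits:
    cpu: 400m      # start around 2x the request, then iterate
    memory: 512Mi
```

<p>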
Chances are, if it were, I would need to bump it up to a more reasonable number. This is an iterative process until you get it right.</p><h4 id="pay-attention-to-the-ratio">Pay attention to the ratio</h4><p>It&#x2019;s key not to let low requests trick Kubernetes into scheduling all the pods onto one node, only for them to exhaust all the CPU with incredibly high limits. Ideally, the requests and limits should not be too far from each other, say within a 2x to 5x range. Otherwise, the application is probably too spiky, or may even have some kind of leak. If that&#x2019;s the case, it&#x2019;s prudent to get to the bottom of the application&#x2019;s footprint.</p><h4 id="review-regularly">Review regularly</h4><p>Applications will undergo changes as long as they are active, and so will their footprints. Make sure you have some kind of review process that takes you back to Step 1 (figure out how much you typically need). This is the only way to keep things in tip-top shape.</p><h3 id="profit">Profit</h3><p>So, did it work? You bet! There were quite a few services in our cluster with disproportionate requests/limits. After I adjusted these heavy-duty services, the cluster runs with much more stability. Here is how it looks now:</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/11/3-1024x353.png" class="kg-image" alt="How to Set Kubernetes Resource Requests and Limits&#x200A;-&#x200A;A Saga to Improve Cluster Stability and Efficiency" loading="lazy" width="1024" height="353" srcset="https://buffer.com/resources/content/images/size/w600/wp-content/uploads/2019/11/3-1024x353.png 600w, https://buffer.com/resources/content/images/size/w1000/wp-content/uploads/2019/11/3-1024x353.png 1000w, https://buffer.com/resources/content/images/wp-content/uploads/2019/11/3-1024x353.png 1024w" sizes="(min-width: 720px) 720px"></figure><p>Wait! What about the efficiency promised in the title? Please note that the band has become more constricted after the changes. 
This shows the CPU resource across the cluster is being utilized more uniformly, which in turn makes scaling up have a close-to-linear effect &#x2013; a lot more efficient.</p><h3 id="closing-words">Closing words</h3><p>In contrast with deploying each service on a set of dedicated computing instances, service-oriented architecture allows many services to share a single Kubernetes cluster. Precisely because of this, each service now bears the responsibility of specifying its own resource requirements. And this step is not to be taken lightly. An unstable cluster affects all the residing services, and troubleshooting is often challenging. Admittedly, not all of us are experienced with this kind of configuration. In the good ol&#x2019; days, all we needed to do was deploy our one thing on some servers and scale up/down to our liking. I think this might be why I don&#x2019;t see a lot of discussion around the resource parameters in Kubernetes. Through this post, it&#x2019;s my hope to help a few people out there who are struggling with this new concept (I know I did). More importantly, perhaps I can learn from someone who has other techniques. If you have any thoughts on this, please feel free to hit me up on <a href="https://twitter.com/stevenc81" rel="noreferrer noopener">Twitter</a>.</p><hr><p><em>Originally published at </em><a href="https://gist.github.com/stevenc81/086d5ed7435ee66d4ea697e6d4461ca2" rel="noreferrer noopener"><em>GitHub</em></a><em>.</em></p>]]></content:encoded></item><item><title><![CDATA[Announcing The Buffer Overflow Podcast]]></title><description><![CDATA[So, why start a podcast? 
This post announces the Buffer Overflow podcast.]]></description><link>https://buffer.com/resources/announcing-the-buffer-overflow-podcast/</link><guid isPermaLink="false">5e991eb04280f300389c6b6d</guid><category><![CDATA[Workplace of the future]]></category><category><![CDATA[Overflow]]></category><dc:creator><![CDATA[Jordan Morgan]]></dc:creator><pubDate>Mon, 07 Oct 2019 18:07:47 GMT</pubDate><media:content url="https://buffer.com/resources/content/images/wp-content/uploads/2019/10/podcastHeader.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/10/podcastHeader.jpg" alt="Announcing The Buffer Overflow Podcast"><p>Today we&#x2019;re happy to announce our new engineering podcast, The Buffer Overflow Podcast. And, our first episode is now available for streaming!</p><h4 id="how-to-listen">How to Listen</h4><p>The Buffer Overflow Podcast is now available on all major platforms today, so if you open your podcast player of choice and search for us, we should be there! Here are some direct links:</p><ul><li><a href="https://www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy9lMDE0NmRjL3BvZGNhc3QvcnNz">Google Podcasts</a></li><li><a href="https://open.spotify.com/show/70J74zOzHpUYI1orYWFKXC">Spotify</a></li><li><a href="https://www.breaker.audio/the-buffer-overflow-podcast">Breaker</a></li><li><a href="https://overcast.fm/itunes1480847551/the-buffer-overflow-podcast">Overcast</a></li><li><a href="https://pca.st/wd8zwxd6">Pocketcasts</a></li><li><a href="https://radiopublic.com/the-buffer-overflow-podcast-WwnQVL">RadioPublic</a></li><li><a href="https://anchor.fm/s/e0146dc/podcast/rss">RSS feed</a></li></ul><h4 id="so-why-start-a-podcast">So, Why Start a Podcast?</h4><p>It&#x2019;s built into our D.N.A. here at Buffer to try and be transparent about the things we know, are learning or even have struggled with as engineers. 
Those are things we want to talk about more openly, and that&#x2019;s one of the goals of our new podcast.</p><p>We plan to talk about topics common to engineering (how we plan application architecture), operational things (how we plan the same features across many platforms) and even remote work (how to structure your day).</p><p>To kick things off, Joe and I will be hosting the beginning episodes as we get things off the ground. Expect to hear from more members across our engineering team soon, though!</p><p>We hope you find some value in our new podcast, and thanks for listening!</p>]]></content:encoded></item><item><title><![CDATA[Getting Buffer Publish ready for Android 10]]></title><description><![CDATA[Learn about the things we did to prepare our app for the Android 10 release.]]></description><link>https://buffer.com/resources/getting-buffer-publish-ready-for-android-10/</link><guid isPermaLink="false">5e991eb04280f300389c6b6e</guid><category><![CDATA[Overflow]]></category><dc:creator><![CDATA[Joe Birch]]></dc:creator><pubDate>Wed, 11 Sep 2019 12:45:04 GMT</pubDate><media:content url="https://buffer.com/resources/content/images/wp-content/uploads/2019/09/10-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/09/10-1.png" alt="Getting Buffer Publish ready for Android 10"><p>Android 10 is officially here! On 3rd September it began rolling out for Pixel devices, so we wanted to be sure that our app was ready to serve our users who would have Android 10 installed on their device. When these OS updates come around, as developers we sometimes don&#x2019;t know how many changes we&#x2019;re going to need to make to our applications. When Android Marshmallow was released, permissions changes were quite a big thing for many applications. 
As we&#x2019;ve moved through the versions up to Android Pie, we&#x2019;ve also seen restrictions of various forms on permissions as a whole, along with limits on the size of data passed through intents and various media-focused changes. When it comes to Android 10, there is a collection of changes (<a href="https://developer.android.com/about/versions/10/behavior-changes-all">as outlined on the developer site</a>) that could potentially affect your app &#x2013; these also include some enhancements available in this latest release.</p><p>Whilst we haven&#x2019;t had to make too many changes for our app, in this post we&#x2019;ll share the things we did to prepare our app for the Android 10 release and how we adapted some of the new features into our application.</p><hr><h2 id="updating-versions">Updating versions</h2><p>Other than updating the <strong>targetSdkVersion</strong> of your application to <strong>29</strong>, you&#x2019;re going to want to update a couple of Android-related dependencies that will improve the release of your application for Android 10.</p><p>To begin with, you may have seen that Android 10 features the ability to enable gesture navigation &#x2013; this removes the software buttons at the bottom of the device, allowing the user to rely on gestural navigation to move through applications. Whilst this is not enabled by default, we wanted to make sure that our app behaved as expected when this gestural mode was enabled. In order to get the best experience with gestural navigation you&#x2019;ll want to update to the latest version of the androidx drawerlayout dependency. Yes, this is an alpha version, but without it there may be some UX difficulties when it comes to certain interactions. For example, on <strong>1.1.0-alpha02</strong> we experienced some oddities when trying to open the navigation drawer within our app. 
For this reason you should be sure to use at least <strong>1.1.0-alpha03</strong> if you are targeting SDK 29.</p><p>If you&#x2019;re using the androidx appcompat dependency, then updating it to <strong>1.1.0-rc01</strong> might be required if you are planning on making use of the sharing improvements, with backwards compatibility, for Android 10. In our case, this update was required to make use of the <strong>ShortcutInfoCompat</strong> and <strong>ShortcutManagerCompat</strong> classes.</p><hr><h2 id="gestural-navigation">Gestural Navigation</h2><p>This was a big change for Android 10 &#x2013; whilst an optional setting that can be toggled from the system settings, gestural navigation allows the user to navigate through the system, apps and backstack via the use of swipe gestures. Aiming to replace the software buttons located at the bottom of the device, this gives us more screen real estate to play with, as well as the option to provide a more streamlined navigation experience.</p><p>After enabling gestural navigation through Settings &gt; Gestures &gt; System Navigation you&#x2019;ll be able to try it for yourself. Whilst at first this may just seem like a new way to navigate through your device, it can actually have a big impact on the user experience of applications. Because we can now swipe horizontally from either side of the screen, this can interfere with components within your application.</p><p>Now, this really does depend on the application in question, as some applications may not be affected by this change &#x2013; it really depends on the components that make up your project. Before updating to alpha03 of the navigation drawer dependency, within alpha02 we experienced an issue where the navigation drawer would not open correctly. 
As it was, this would not be releasable, as that&#x2019;s where our users change their currently selected account, which is a core part of our application. If you&#x2019;re unaware of gestural navigation or missed out on testing this, it&#x2019;s worth double checking that you&#x2019;re on the latest version of this library and that the navigation drawer in your app behaves correctly.</p><p>Alongside the navigation drawer, any component whose touch events may usually be intercepted within the bounds of this back-swipe gesture should be tested to ensure that there is no interference. For example, if we had an edge-to-edge swipe-able view component, this could potentially be affected by gestural navigation. Whilst we didn&#x2019;t have anything else that was affected by the gestural navigation changes, this might not be the same for your application. <a href="https://medium.com/androiddevelopers/gesture-navigation-handling-visual-overlaps-4aed565c134c">Check out this blog post</a> for more information on how to overcome these kinds of issues.</p><hr><h2 id="scoped-storage">Scoped Storage</h2><p>As the versions of Android have progressed, we&#x2019;ve often seen changes being made to permissions and/or how we access files on the device. This is an important topic, and we can understand these changes being made, as our users&#x2019; content needs to be both protected and available to other applications when requested. Scoped Storage has been an interesting topic, right back to when the original beta releases and documentation came out for Android 10. 
There is more information on these changes <a href="https://developer.android.com/training/data-storage/files/external-scoped">here</a>, but to summarise how media access now looks across the system:</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/09/Image-2019-09-11-at-5.29.54-am-1-1024x389.png" class="kg-image" alt="Getting Buffer Publish ready for Android 10" loading="lazy" width="1024" height="389" srcset="https://buffer.com/resources/content/images/size/w600/wp-content/uploads/2019/09/Image-2019-09-11-at-5.29.54-am-1-1024x389.png 600w, https://buffer.com/resources/content/images/size/w1000/wp-content/uploads/2019/09/Image-2019-09-11-at-5.29.54-am-1-1024x389.png 1000w, https://buffer.com/resources/content/images/wp-content/uploads/2019/09/Image-2019-09-11-at-5.29.54-am-1-1024x389.png 1024w" sizes="(min-width: 720px) 720px"></figure><p>Compared to the original specification for scoped storage, what&#x2019;s in place now for Android 10 aims to provide an approach that is accessible for both users and developers. For Publish, we needed to make a couple of changes, as we were seeing some errors when trying to handle media from certain locations on the device. In some places we were previously making use of the file path to access files on the device from other apps, and on Android 10 this can cause issues, as we may not have permission to access the file. 
If we are looking to retrieve a file then we should now be accessing it using the content resolver &#x2013; on Android 10 there is a new function called openFile() to do this.</p><!--kg-card-begin: html--><pre><code>val fileDescriptor = context.contentResolver.openFile(mediaUri, &quot;r&quot;, null)</code></pre><!--kg-card-end: html--><p>Once we have the ParcelFileDescriptor from this call we can use it to access the file without running into any permission-related issues (provided that we have been granted read access to the user&#x2019;s storage where required). This method of accessing files also needs to be taken into account when reading Exif data &#x2013; when creating a new instance of an ExifInterface, it will need to take the file retrieved from the media store using the above approach. The same rules apply to any approach that deals with media &#x2013; your best bet is to read up on the storage-related changes for Android 10 and see how they might affect your app.</p><hr><h2 id="settings-panels">Settings Panels</h2><p>This feature allows users to access key settings for the device (related to connectivity and audio) so that they can easily change common settings without leaving the context of your application. This functionality is only available on Android 10, so it is not backward compatible with older versions of the Android OS. A lot of applications make use of networking features, so the connectivity side of things is something that these apps will be able to (and really should) make use of.</p><p>Let&#x2019;s take a look at one example of where we make use of this. Within the Buffer Publish app we load content from our API into the social queue of an account. If this request fails, we show an error view that allows the user to retry the request. At this point, the retry button can be pressed and we&#x2019;ll attempt to reload the content. 
However, if there is a connectivity issue (maybe the user has airplane mode turned on, or is not connected to WiFi with their data connection disabled) then this retry button will send the user into an endless loop of failed retries. We decided to make use of settings panels here so that if there is no connection available, the button will instead launch a connectivity settings panel, allowing the user to change their connectivity settings from within our application:</p><!--kg-card-begin: html--><pre><code>activity.startActivityForResult(
&#xA0;&#xA0;&#xA0;&#xA0;Intent(Settings.Panel.ACTION_INTERNET_CONNECTIVITY), 
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;REQUEST_CODE_SETTINGS_PANEL)</code></pre><!--kg-card-end: html--><p>We use <strong>startActivityForResult</strong> here so that we can detect when the user has returned from this event using <strong>REQUEST_CODE_SETTINGS_PANEL</strong>. Ideally here you would register a connectivity listener and change the state of the error view based on events coming from there. However, to keep lean and manage our priorities right now we decided to just presume that the user had changed their settings and switch the button to present &#x201C;Retry&#x201D; so that they can reload the content. If that then fails again, we go back to step one based on whether or not there is a connection available. This approach works well and allowed us to get this enhancement in place &#x2013; there&#x2019;s definitely room for improvement there in future if we see that the settings panel is commonly interacted with.</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/09/Sep-11-2019-13-43-39.gif" class="kg-image" alt="Getting Buffer Publish ready for Android 10" loading="lazy"></figure><hr><h2 id="sharing-improvements">Sharing Improvements</h2><p>Android 10 sees some improvements being made to the share sheet functionality, as outlined in the release notes. This is a huge improvement for users, as previously sharing routes would take some time to load &#x2013; causing a lag effect when trying to share something using the system share sheet. 
With this new approach, share targets are published in advance so that the system has them available for display instantly &#x2013; the cool thing is that this approach uses the existing app shortcuts API to provide this functionality.</p><p>We&#x2019;ve made use of sharing shortcuts so that the share sheet will show individual social accounts from your Buffer account &#x2013; that way, our users can share content directly to the composer with that social account selected:</p><!--kg-card-begin: html--><pre><code>ShortcutInfoCompat.Builder(context, profile.id)
&#xA0;&#xA0;&#xA0;&#xA0;.setShortLabel(profile.formattedUsername)
&#xA0;&#xA0;&#xA0;&#xA0;.setIcon(IconCompat.createWithBitmap(bitmap))
&#xA0;&#xA0;&#xA0;&#xA0;.setCategories(shareCategories)
&#xA0;&#xA0;&#xA0;&#xA0;.setIntents(arrayOf(intent
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;.setAction(ACTION_MAIN)
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TASK or Intent.FLAG_ACTIVITY_NEW_TASK)))
&#xA0;&#xA0;&#xA0;&#xA0;.setPerson(Person.Builder()
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;.setKey(profile.id)
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;.setName(profile.formattedUsername)
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;.setIcon(IconCompat.createWithBitmap(bitmap))
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;.build())
&#xA0;&#xA0;&#xA0;&#xA0;.build()</code></pre><!--kg-card-end: html--><p>You may notice the setCategories(shareCategories) call in the builder. This sets the share category for the shortcut, as per the documentation. We then have this share target defined within our shortcuts.xml file:</p><!--kg-card-begin: html--><pre><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt;
&lt;shortcuts xmlns:android=&quot;http://schemas.android.com/apk/res/android&quot;&gt;
&#xA0;&#xA0;&#xA0;&#xA0;&lt;share-target android:targetClass=&quot;org.buffer.android.composer.ComposerActivity&quot;&gt;
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&lt;data android:mimeType=&quot;text/plain&quot; /&gt;
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&lt;data android:mimeType=&quot;image/*&quot; /&gt;
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&lt;data android:mimeType=&quot;video/*&quot; /&gt;
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&lt;category android:name=&quot;org.buffer.android.category.COMPOSER_SHARE_TARGET&quot; /&gt;
&#xA0;&#xA0;&#xA0;&#xA0;&lt;/share-target&gt;
&lt;/shortcuts&gt;</code></pre><!--kg-card-end: html--><p>Because we have the intent set for the shortcut, this will launch the home screen for that account if the app shortcut is interacted with. On the other hand, the category declaration will handle the case where the shortcut is triggered from the share sheet.</p><p>With the approach above we can now provide a more streamlined (and frictionless) sharing experience for our application throughout the system.</p><p><strong>Note:</strong> If you&#x2019;re looking to use the Compat classes for sharing shortcuts, then you&#x2019;ll need to be on at least version <strong>1.1.0-rc01</strong> of the androidx appcompat dependency.</p><hr><h2 id="biometric-prompt-improvements">Biometric prompt improvements</h2><p>We never previously had biometric login in our application, mainly because it wasn&#x2019;t a priority for us to implement. With Android 10 containing a few more <a href="https://developer.android.com/about/versions/10/features#improved-biometric-auth">improvements to biometric prompts</a>, we decided to add some functionality to our application to take advantage of what the system has to offer here.</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/09/device-2019-09-11-133733-239x300.png" class="kg-image" alt="Getting Buffer Publish ready for Android 10" loading="lazy" width="239" height="300"></figure><p>From within the settings of our app, the user can now enable a setting to require fingerprint authentication when the app is opened (provided that they are logged in). Then when the app is launched and this setting is enabled, we make use of the BiometricPrompt class (available from API level 28) to display this biometric prompt to the user.</p><!--kg-card-begin: html--><pre><code>BiometricPrompt.Builder(context)
&#xA0;&#xA0;&#xA0;&#xA0;.setTitle(title)
&#xA0;&#xA0;&#xA0;&#xA0;.setSubtitle(subtitle)
&#xA0;&#xA0;&#xA0;&#xA0;.setDescription(description)
&#xA0;&#xA0;&#xA0;&#xA0;.setNegativeButton(negativeButtonText, context.mainExecutor, onClickListener)
&#xA0;&#xA0;&#xA0;&#xA0;.build()
&#xA0;&#xA0;&#xA0;&#xA0;.authenticate(cancellationSignal, context.mainExecutor, callback)</code></pre><!--kg-card-end: html--><p>The main change here is that behind the scenes the <a href="https://developer.android.com/reference/android/hardware/biometrics/BiometricManager.html">BiometricManager</a> from API level 29 is used. This class provides us with a more convenient way to check the biometric capabilities of the device, as well as the ability to provide a fallback route (PIN, pattern, etc.) in case the user cannot use biometric authentication.</p><hr><p>As you can see from this article, we didn&#x2019;t have to make <em>too</em> many changes in order to get our app ready for our users who are running Android 10. We found that whilst these changes were small, there was a lot of manual and automated testing that needed to take place to ensure that our app functioned as intended for this OS upgrade. As time goes on, we may look at adding some of the other features and enhancements that Android 10 has introduced &#x2013; be it for current or future feature implementations.</p><p>Is your app ready for Android 10? We&#x2019;d love to hear about the things you&#x2019;ve put in place for this release, along with any questions that you may have during that process!</p>]]></content:encoded></item><item><title><![CDATA[Library module navigation in Android Applications]]></title><description><![CDATA[Learn about library module navigation in Android apps. 
]]></description><link>https://buffer.com/resources/library-module-navigation-in-android-applications/</link><guid isPermaLink="false">5e991eb04280f300389c6b6f</guid><category><![CDATA[Overflow]]></category><dc:creator><![CDATA[Joe Birch]]></dc:creator><pubDate>Tue, 30 Jul 2019 11:57:23 GMT</pubDate><media:content url="https://buffer.com/resources/content/images/wp-content/uploads/2019/07/alexander-andrews-4JdvOwrVzfY-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: html--><img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/07/alexander-andrews-4JdvOwrVzfY-unsplash.jpg" alt="Library module navigation in Android Applications"><p style="text-align: center;">Header Photo by&#xA0;<a href="https://unsplash.com/@alex_andrews?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Alexander Andrews</a>&#xA0;on&#xA0;<a href="https://unsplash.com/search/photos/navigation?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p><!--kg-card-end: html--><p>When it comes to building Android applications, there&#x2019;s no doubt that we&#x2019;ll need to include some form of navigation to move between the different parts of our app. With modularisation becoming more and more popular in Android development, navigation becomes a big part of this process.</p><p>At Buffer we&#x2019;ve begun creating a lot of shared code between our applications &#x2013; some of these are utilities, widgets and even features (note, these are <strong>not</strong> yet dynamic feature modules, instead they are library modules). 
When it comes to features, these will often need to navigate to another part of the app &#x2013; however, because these library modules are not aware of the base Android app module, they are unable to satisfy the navigational requirements in our app.</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/07/flow.png" class="kg-image" alt="Library module navigation in Android Applications" loading="lazy" width="2000" height="470" srcset="https://buffer.com/resources/content/images/size/w600/wp-content/uploads/2019/07/flow.png 600w, https://buffer.com/resources/content/images/size/w1000/wp-content/uploads/2019/07/flow.png 1000w, https://buffer.com/resources/content/images/size/w1600/wp-content/uploads/2019/07/flow.png 1600w, https://buffer.com/resources/content/images/wp-content/uploads/2019/07/flow.png 2048w" sizes="(min-width: 720px) 720px"></figure><p>In this post we are going to explore a solution that not only solves this problem for us, but helps us to separate the navigational concern from the rest of our application, providing us with numerous benefits in the process.</p><hr><p>In the example we can see that we can&#x2019;t easily navigate to some of the screens in our application, purely because we do not have a reference to them. Whilst we are forced to solve this initial problem here, it gets us thinking about navigation in general within our app. With single-module applications we would normally handle navigation by calling <code>startActivity()</code>, passing in an Intent that holds a direct reference to the Activity class we were navigating to. However, this approach raises the question of whether our activities should know where they are navigating to at all. If anything, this leaks another concern and responsibility into the activity. 
As well as this, the class that is launching that activity now has a direct reference to it, which adds to the concepts that our launching class is aware of and ties it directly to that destination &#x2013; something that a library (in most cases) cannot do. Finally, when it comes to the testing of our class, navigation also becomes a concern of these tests &#x2013; which reinforces the argument that there is another responsibility living inside of our activity.</p><p>As it is becoming more common now to introduce modularisation to the mix, this is likely to become a common problem faced amongst applications. Because of this and the above issues, it may make sense in some cases to split out the navigation of our app to be handled by classes outside of the ones which are currently performing the navigation. Not only does this make navigation possible from these internal library modules, but it helps to separate the concerns of our navigation and makes it far easier to test the implementations of navigation within our apps.</p><p>With all of this in mind, how can we achieve the above when it comes to navigation within our android apps? Let&#x2019;s begin by taking a look at an approach that meets all of the above requirements.</p><hr><p>Because we&#x2019;re focusing in this post on library modules, let us begin by taking a look at a simple library module we have within our applications. Both of our apps (Publish and Reply) share the same on-boarding screens and because of this, we make use of a shared library that we import as a gradle dependency. This dependency shows a couple of on-boarding steps that the user can swipe through, but the important part here is the two buttons that are displayed to the user. These buttons allow the user to navigate to either the Sign-Up or Sign-In screens &#x2013; these are not part of the on-boarding library as currently these screens are very different for each of the apps. 
As previously mentioned in this post, the on-boarding library does not have (and cannot have) a reference to the base app module due to it being a library module. So as it is, it cannot navigate to the activities inside of the base app module &#x2013; and because it is being reused for multiple apps, the paths to the desired activities will be completely different, so this is not something that should be hard coded.</p><p>With this in mind, the Onboarding library is going to need to provide an interface which states the navigational requirements that need to be satisfied. This allows the onboarding module to define and enforce these requirements without having any knowledge of the actual details of these actions.</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/07/onb-1024x724.png" class="kg-image" alt="Library module navigation in Android Applications" loading="lazy" width="1024" height="724" srcset="https://buffer.com/resources/content/images/size/w600/wp-content/uploads/2019/07/onb-1024x724.png 600w, https://buffer.com/resources/content/images/size/w1000/wp-content/uploads/2019/07/onb-1024x724.png 1000w, https://buffer.com/resources/content/images/wp-content/uploads/2019/07/onb-1024x724.png 1024w" sizes="(min-width: 720px) 720px"></figure><p>From here, the application module using that onboarding library can implement the interface and satisfy the navigational requirements. So for example, the activity launching the screens of the onboarding library may implement that interface and, when the methods are triggered, handle the navigation around sign-up and sign-in.</p><p>Whilst implementing this, however, it got us thinking about the responsibility of navigation. 
Our activities and fragments are already handling other things, so we could make an improvement here by removing this responsibility from these components and handling the navigation elsewhere.</p><p>For this reason, we decided to introduce a Navigation module. The purpose of this module is to encapsulate all of the navigational logic of the application &#x2013; allowing us to remove this knowledge from our activities / fragments and make it far easier to test the navigational aspects of our app.</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/07/pppp-1024x958.png" class="kg-image" alt="Library module navigation in Android Applications" loading="lazy" width="1024" height="958" srcset="https://buffer.com/resources/content/images/size/w600/wp-content/uploads/2019/07/pppp-1024x958.png 600w, https://buffer.com/resources/content/images/size/w1000/wp-content/uploads/2019/07/pppp-1024x958.png 1000w, https://buffer.com/resources/content/images/wp-content/uploads/2019/07/pppp-1024x958.png 1024w" sizes="(min-width: 720px) 720px"></figure><p>With this approach, the Navigation module needs to have a reference to the modules that contain the interfaces defining the required navigation. 
Whilst this module may end up having a reference to multiple modules, this is fine as it&#x2019;s not intended to be reusable and is fulfilling its purpose &#x2013; it also removes the need to have these dependency references from within our app module itself (in most cases).</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/07/two-2-1024x491.png" class="kg-image" alt="Library module navigation in Android Applications" loading="lazy" width="1024" height="491" srcset="https://buffer.com/resources/content/images/size/w600/wp-content/uploads/2019/07/two-2-1024x491.png 600w, https://buffer.com/resources/content/images/size/w1000/wp-content/uploads/2019/07/two-2-1024x491.png 1000w, https://buffer.com/resources/content/images/wp-content/uploads/2019/07/two-2-1024x491.png 1024w" sizes="(min-width: 720px) 720px"></figure><p>Whilst the Navigation module implements the navigation, the Publish module here still needs a reference to navigation for two reasons. First of all, we don&#x2019;t have Navigation handling its own dependency injection &#x2013; navigation becomes a part of the Dagger Component and Module that we have configured within our Publish module. This allows us to provide the required injections inside of the onboarding library &#x2013; whilst having this injection requirement here isn&#x2019;t ideal, seeing as it is an internal library this solution works well for us at this point in time.</p><p>Another reason why the Publish module needs a reference to this Navigation module is to account for other navigation requirements throughout the app. 
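</p><p>As a rough sketch of how that dependency injection wiring can look with Dagger (the module name below is hypothetical rather than our exact setup), the Publish module can simply bind the coordinator implementation to the interface:</p><!--kg-card-begin: html--><pre><code>@Module
abstract class NavigatorBindingModule {

&#xA0;&#xA0;&#xA0;&#xA0;// Dagger will now provide OnboardingCoordinator wherever an
&#xA0;&#xA0;&#xA0;&#xA0;// OnboardingNavigator is injected (such as in the onboarding library).
&#xA0;&#xA0;&#xA0;&#xA0;@Binds
&#xA0;&#xA0;&#xA0;&#xA0;abstract fun bindOnboardingNavigator(coordinator: OnboardingCoordinator): OnboardingNavigator
}</code></pre><!--kg-card-end: html--><p>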
Moving forward we will not be restricting this to library modules only &#x2013; as we move to decouple more features within our application, these will use navigation interfaces within their corresponding packages (even if not yet modularised) &#x2013; so having this reference to the navigation module helps us to achieve that.</p><hr><p>When it comes to this Onboarding module, as previously mentioned, it will define an interface (let&#x2019;s call it the OnboardingNavigator) that defines the required navigation. This might look a little something like this:</p><!--kg-card-begin: html--><pre><code>interface OnboardingNavigator {

&#xA0;&#xA0;&#xA0;&#xA0;fun showSignUpForm(activity: Activity)

&#xA0;&#xA0;&#xA0;&#xA0;fun showSignInForm(activity: Activity)
}</code></pre><!--kg-card-end: html--><p>We pass in the activity here so that the implementation of this interface can completely handle the navigation to the next destination. We do not want the library itself to navigate using string declarations of activity paths, so some form of context is required here.</p><p>Within our Navigation module we&#x2019;re going to want to provide an implementation of this interface &#x2013; this allows us to implement the required functions and launch the required activities when those functions are triggered.</p><!--kg-card-begin: html--><pre><code>class OnboardingCoordinator @Inject constructor() : OnboardingNavigator {

&#xA0;&#xA0;&#xA0;&#xA0;override fun showSignUpForm(activity: Activity) {
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;activity.startActivity(...)
&#xA0;&#xA0;&#xA0;&#xA0;}

&#xA0;&#xA0;&#xA0;&#xA0;override fun showSignInForm(activity: Activity) {
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;activity.startActivity(...)
&#xA0;&#xA0;&#xA0;&#xA0;}
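
&#xA0;&#xA0;&#xA0;&#xA0;// A hedged sketch of how the elided startActivity(...) calls above can be
&#xA0;&#xA0;&#xA0;&#xA0;// satisfied: build the Intent from the fully qualified class name (the path)
&#xA0;&#xA0;&#xA0;&#xA0;// of the destination, so no class reference to the base app module is needed.
&#xA0;&#xA0;&#xA0;&#xA0;private fun intentFor(activity: Activity, className: String): Intent =
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;Intent().setClassName(activity, className)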
}</code></pre><!--kg-card-end: html--><p>Here we actually use the path of the activity to satisfy the intent. This way we do not need a reference to the base app module of our project within the Navigation module &#x2013; the same goes for the other destinations that we might navigate to.</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/07/four-1024x484.png" class="kg-image" alt="Library module navigation in Android Applications" loading="lazy" width="1024" height="484" srcset="https://buffer.com/resources/content/images/size/w600/wp-content/uploads/2019/07/four-1024x484.png 600w, https://buffer.com/resources/content/images/size/w1000/wp-content/uploads/2019/07/four-1024x484.png 1000w, https://buffer.com/resources/content/images/wp-content/uploads/2019/07/four-1024x484.png 1024w" sizes="(min-width: 720px) 720px"></figure><p>Then, when it comes to the library module, we can initialise a new instance of our coordinator where it is used. If you are using dependency injection, you can inject this into the library module by injecting the interface type and then access the functions of that interface as desired:</p><!--kg-card-begin: html--><pre><code>@Inject lateinit var onboardingCoordinator: OnboardingNavigator</code></pre><!--kg-card-end: html--><p>Now that we have access to the OnboardingNavigator reference, we can make use of it to trigger the required navigation. So for example, here we have a click listener set on one of the buttons within the onboarding screens &#x2013; when that button is clicked we can call the corresponding interface function.</p><!--kg-card-begin: html--><pre><code>button_new_user.setOnClickListener { onboardingCoordinator.showSignUpForm(this) }</code></pre><!--kg-card-end: html--><p>When it comes to these library modules in our applications, we will generally have the kind of structure that is stated below. 
This outlines the dependencies that will occur between the Navigation module and the specified feature Library module.</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/07/five.png" class="kg-image" alt="Library module navigation in Android Applications" loading="lazy" width="2000" height="393" srcset="https://buffer.com/resources/content/images/size/w600/wp-content/uploads/2019/07/five.png 600w, https://buffer.com/resources/content/images/size/w1000/wp-content/uploads/2019/07/five.png 1000w, https://buffer.com/resources/content/images/size/w1600/wp-content/uploads/2019/07/five.png 1600w, https://buffer.com/resources/content/images/size/w2400/wp-content/uploads/2019/07/five.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>But for any feature library module that we have implemented, we&#x2019;ll end up with a similar approach regardless of what the feature is. With that in mind, we can summarise that these modules will take on a general structure when it comes to handling navigation.</p><hr><p>To conclude the above approach to handling navigation when working with android library modules, overall we end up with something that looks like this:</p><figure class="kg-card kg-image-card"><img src="https://buffer.com/resources/content/images/wp-content/uploads/2019/07/six-1024x672.png" class="kg-image" alt="Library module navigation in Android Applications" loading="lazy" width="1024" height="672" srcset="https://buffer.com/resources/content/images/size/w600/wp-content/uploads/2019/07/six-1024x672.png 600w, https://buffer.com/resources/content/images/size/w1000/wp-content/uploads/2019/07/six-1024x672.png 1000w, https://buffer.com/resources/content/images/wp-content/uploads/2019/07/six-1024x672.png 1024w" sizes="(min-width: 720px) 720px"></figure><p>In this diagram it may look like there is a lot going on, but some of these boxes just represent the functions that we define within 
the interfaces, along with the implementations. We can see here the different steps of this article pieced together into a complete solution. To summarise this we end up with:</p><ul><li>A <strong>library module</strong> that defines the navigational requirements in the form of an interface</li><li>A <strong>navigation module</strong> that is responsible for implementing the navigation that is defined within that library module</li><li>An <strong>app </strong>module which configures this <strong>navigation module</strong> for dependency injection and also to provide navigation should any other parts of our project need it (this part is subject to your project structure)</li></ul><p>With these three core concepts in mind, we can see that we now have a clear separation of responsibilities when it comes to navigation, keeping our classes more lightweight and focused. We also see the benefits when it comes to testing &#x2013; now our tests for our activities / fragments no longer contain checks for specific intents being launched; however, we can still test these behaviours for our coordinator classes. For example, we could write a small test for our coordinator to ensure that the correct navigation remains in place:</p><!--kg-card-begin: html--><pre><code>@Test
fun showSignUpFormNavigatesToEmailConnectForSignUp() {
&#xA0;&#xA0;&#xA0;&#xA0;main.launchActivity(null)
&#xA0;&#xA0;&#xA0;&#xA0;// Required before intending / intended &#x2013; pairs with the Intents.release() call below
&#xA0;&#xA0;&#xA0;&#xA0;Intents.init()
&#xA0;&#xA0;&#xA0;&#xA0;Intents.intending(IntentMatchers.anyIntent())
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;.respondWith(Instrumentation.ActivityResult(Activity.RESULT_OK, Intent()))

&#xA0;&#xA0;&#xA0;&#xA0;onboardingCoordinator.showSignUpForm(main.activity)
&#xA0;&#xA0;&#xA0;&#xA0;intended(allOf(hasComponent(Activities.EmailConnect.className),
&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;&#xA0;hasExtra(EXTRA_CONNECT_TYPE, 0)))
&#xA0;&#xA0;&#xA0;&#xA0;Intents.release()
}</code></pre><!--kg-card-end: html--><p>We can also do a similar thing from UI tests. Whilst we are not testing here to check whether a specific activity is launched, we can still verify that the navigator instance was interacted with as expected:</p><!--kg-card-begin: html--><pre><code>verify(onboardingNavigator).showSignUpForm(...)</code></pre><!--kg-card-end: html--><p>As we move forward with features we will take the same approach that has been taken above to satisfy navigation throughout our app. Even for features that are not yet modularised (or able to be modularised, for example due to tight coupling) we will be able to take a similar approach by using the navigation module whilst experiencing all of the advantages listed above.</p><p>At the same time it&#x2019;s important to note that this may not be applicable to all applications. Before putting this in place (like any technical approach) it&#x2019;s important to ask if you <strong>need</strong> this. When it comes to library modules where we cannot access specific classes it feels appropriate, as we need some form of interface in place anyway. But for applications without these modules in place, it&#x2019;s a benefit but not the be-all and end-all.</p>]]></content:encoded></item><item><title><![CDATA[Video: Getting started with Go [Bufferdevs Snackchat]]]></title><description><![CDATA[In this Snackchat, Joe Birch gives us a quick introduction to Go. 
This post shares a video on getting started with Go.]]></description><link>https://buffer.com/resources/video-getting-started-with-go-bufferdevs-snackchat/</link><guid isPermaLink="false">5e991eb04280f300389c6b70</guid><category><![CDATA[Workplace of the future]]></category><category><![CDATA[Overflow]]></category><dc:creator><![CDATA[Joe Birch]]></dc:creator><pubDate>Fri, 07 Jun 2019 13:27:54 GMT</pubDate><content:encoded><![CDATA[<figure class="kg-card kg-embed-card"><iframe width="640" height="360" src="https://www.youtube.com/embed/j7OCVQD97WE?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><p>At Buffer we regularly hold what we call &#x2018;Snackchats&#x2019;. These are short &amp; informal presentations of something that we want to share with our team, helping to build on our engineering culture and helping each other to grow as engineers. Anyone on the team has the opportunity to give these talks.</p><p>Once a discussion has been proposed, a day / time can be picked, ready for people to grab a drink (and snack!) and take some time out of their day to learn something new.</p><p>These have been happening at Buffer for some time. However, we have now decided that we&#x2019;re going to start sharing these outside of our place of work. This ties in with our value of Transparency and allows us to share our learnings with even more people. So grab your drink of choice and your favourite snack, and let&#x2019;s learn something new together.</p><p>In this Snackchat, Joe Birch gives us a quick introduction to Go. We learnt what Go is, some of the features it comes with and took a quick tour of some basic language features to get us started.</p>]]></content:encoded></item></channel></rss>