Category archive: CSS

Musings on HTTP/2 and Bundling

Post retrieved from: Musings on HTTP/2 and Bundling

HTTP/2 has been one of my areas of interest. In fact, I’ve written a few articles about it just in the last year. In one of those articles I made this unchecked assertion:

If the user is on HTTP/2: You’ll serve more and smaller assets. You’ll avoid stuff like image sprites, inlined CSS, and scripts, and concatenated style sheets and scripts.

I wasn’t the only one to say this, though in all fairness to Rachel, she qualifies her assertion with caveats in her article. And it’s not bad advice in theory. HTTP/2’s multiplexing ability gives us leeway to avoid bundling without suffering the ill effects of head-of-line blocking (something we’re painfully familiar with in HTTP/1 environments). Unraveling some of these HTTP/1-specific optimizations can make development easier, too. In a time when web development seems more complicated than ever, who wouldn’t appreciate a little more simplicity?

As with anything that seems simple in theory, putting something into practice can be a messy affair. As time has progressed, I’ve received great feedback from thoughtful readers on this subject that has made me re-think my unchecked assertions on what practices make the most sense for HTTP/2 environments.

The case against bundling

The debate over unbundling assets for HTTP/2 centers primarily around caching. The premise is that if you serve more (and smaller) assets instead of a giant bundle, caching efficiency for return users with primed caches will be better. Makes sense. If one small asset changes and the cache entry for it is invalidated, it will be downloaded again on the next visit. However, if only one tiny part of a bundle changes, the entire giant bundle has to be downloaded again. Not exactly optimal.

Why unbundling could be suboptimal

There are times when unraveling bundles makes sense. For instance, code splitting promotes smaller and more numerous assets that are loaded only for specific parts of a site/app. This makes perfect sense. Rather than loading your site’s entire JS bundle up front, you chunk it out into smaller pieces that you load on demand. This keeps the payloads of individual pages low. It also minimizes parsing time. This is good, because excessive parsing can make for a janky and unpleasant experience as a page paints and becomes interactive, but has not yet fully loaded.

But there’s a drawback to this we sometimes miss when we split assets too finely: Compression ratios. Generally speaking, smaller assets don’t compress as well as larger ones. In fact, if some assets are too small, some server configurations will avoid compressing them altogether, as there are no practical gains to be made. Let’s look at how well some popular JavaScript libraries compress:

Filename Uncompressed Size Gzip (Ratio %) Brotli (Ratio %)
jquery-ui-1.12.1.min.js 247.72 KB 66.47 KB (26.83%) 55.8 KB (22.53%)
angular-1.6.4.min.js 163.21 KB 57.13 KB (35%) 49.99 KB (30.63%)
react-0.14.3.min.js 118.44 KB 30.62 KB (25.85%) 25.1 KB (21.19%)
jquery-3.2.1.min.js 84.63 KB 29.49 KB (34.85%) 26.63 KB (31.45%)
vue-2.3.3.min.js 77.16 KB 28.18 KB (36.52%)
zepto-1.2.0.min.js 25.77 KB 9.57 KB (37.14%)
preact-8.1.0.min.js 7.92 KB 3.31 KB (41.79%) 3.01 KB (38.01%)
rlite-2.0.1.min.js 1.07 KB 0.59 KB (55.14%) 0.5 KB (46.73%)

Sure, this comparison table is overkill, but it illustrates a key point: Large files, as a rule of thumb, tend to yield higher compression ratios than smaller ones. When you split a large bundle into teeny tiny chunks, you won’t get as much benefit from compression.

Of course, there’s more to performance than asset size. In the case of JavaScript, we may want to tip our hand toward smaller page/template-specific files, because the initial load of a specific page will be more streamlined with regard to both file size and parse time, even if those smaller assets don’t compress as well individually. Personally, that would be my inclination if I were building an app. On traditional, synchronous "site"-like experiences, I’m not as inclined to pursue code splitting.

Yet, there’s more to consider than JavaScript. Take SVG sprites, for example. Where these assets are concerned, bundling appears more sensible, especially for large sprite sets. I performed a basic test on a very large icon set of 223 icons. In one test, I served a sprited version of the icon set. In the other, I served each icon as an individual asset. In the test with the SVG sprite, the total size of the icon set was just under 10 KB of compressed data. In the test with the unbundled assets, the total size of the same icon set was 115 KB of compressed data. Even with multiplexing, there’s simply no way 115 KB can be served faster than 10 KB on any given connection. The compression doesn’t go far enough on the individualized icons to make up the difference. (Technical aside: the SVG images were optimized by SVGO in both tests.)

Side note: One astute commenter has pointed out that Firefox dev tools show that in the unsprited test, approximately 38 KB of data was transferred. That could affect how you optimize. Just something to keep in mind.

Browsers that don’t support HTTP/2

Yep, this is a thing. Opera Mini in particular seems to be a holdout in this regard, and depending on your users, this may not be an audience segment you can afford to ignore. While around 80% of people globally surf with browsers that support HTTP/2, that number declines in some corners of the world. Shy of 50% of all users in India, for example, use a browser that can communicate with HTTP/2 servers (according to caniuse, anyway). This is the picture for now, at least, and support is trending upward, but we’re a long way from ubiquitous browser support for the protocol.

What happens when a user talks to an HTTP/2 server with a browser that doesn’t support it? The server falls back to HTTP/1. This means you’re back to the old paradigms of performance optimization. So again, do your homework. Check your analytics and see where your users are coming from. Better yet, leverage caniuse’s ability to analyze your analytics data and see what your audience supports.

The reality check

Would any sane developer architect their front end code to load 223 separate SVG images? I hope not, but nothing really surprises me anymore. In all but the most complex and feature-rich applications, you’d be hard-pressed to find so much iconography. Still, it could make sense for you to coalesce those icons into a sprite, load it up front, and reap the benefits of faster rendering on subsequent page navigations.

Which leads me to the inevitable conclusion: In the nooks and crannies of the web performance discipline there are no simple answers, except "do your research". Rely on analytics to decide if bundling is a good idea for your HTTP/2-driven site. Do you have a lot of users that only go to one or two pages and leave? Maybe don’t waste your time bundling stuff. Do your users navigate deeply throughout your site and spend significant time there? Maybe bundle.

This much is clear to me: If you move your HTTP/1-optimized site to an HTTP/2 host and change nothing in your client-side architecture, it’s not going to be a big deal. So don’t trust blanket statements from some web developer writing blog posts (i.e., me). Figure out how your users behave, what optimizations make the best sense for your situation, and adjust your code accordingly. Good luck!

Cover of Web Performance in Action

Jeremy Wagner is the author of Web Performance in Action, an upcoming title from Manning Publications. Use coupon code sswagner to save 42%.

Check him out on Twitter: @malchata

Musings on HTTP/2 and Bundling is a post from CSS-Tricks

Did CSS get more complicated since the late nineties?


Hidde de Vries gathers some of the early thinking about CSS:

There is quite a bit of information on the web about how CSS was designed. Keeping it simple was a core principle. It continued to be — from the early days and the first implementations in the late nineties until current developments now.

The four main design principles listed are fascinating:

  • Authors can specify as much or as little as they want
  • It is not a programming language by design
  • They are agnostic as to which medium they are used for
  • It is stream-based

So… did it?

I think lots has changed since the early nineties, but not really things that touch on how we apply CSS to structured markup.



Let’s say you wanna open source a little thing…


Let’s say you’ve written a super handy little bit of JavaScript. Nice! Well done, you. Surely, the world can benefit from this. A handful of people, at least. No need to keep this locked up. You’ve benefitted from open source tremendously in your career. This is the perfect opportunity to give back!

Let’s do this.

You’re going to need to chuck it into a GitHub repo. That’s like table stakes for open source. This is where people can find it, link to it, see the code, and all that. It’s a place you can push changes to if you need to.

You’ll need to pick a license for it. If the point of this is "giving back", you really do need to; otherwise, it’s essentially as if you hold exclusive copyright over it. It’s somewhat counter-intuitive, but picking a license opens up usage, rather than tightening it.

You’ll need to put a README in there. As amazingly self-documenting as you think your code is, it isn’t. You’ll need some plain-language writing in there to explain what your thing is and does. Usage samples are vital.

You’ll probably wanna chuck some demos in there, too. Maybe a whole `/demos/` directory so the proof can be in the pudding.

While you’ve made some demos, you might as well put some tests in there. Those go hand-in-hand. Tests give people who might use your thing some peace of mind that it’s going to work, and that as an author you care about making sure it does. If you plan to keep your thing updated, tests will ensure you don’t break things as you make changes.

Speaking of people using your thing… just how are they going to do that? You probably can’t just leave a raw function theThing() in `the-thing.js`! This isn’t the 1800s, they’ll tell you. You didn’t even wrap an IIFE around it?! You should have at least made it a singleton.

People are going to want to const theThing = require("the-thing.js"); that sucker. That’s the CommonJS format, which seems reasonable. But that’s kinda more for Node.js than the browser, so it also seems reasonable to use define() and return a function, otherwise known as AMD. Fortunately, there is UMD, which is like both of those at the same time.

Wait wait wait. ES6 has now firmly arrived and it has its own module format. People are for sure going to want to import { theThing } from './the-thing.js';, which means you’re going to need to export function theThing() { }.

So what should you do here? Heck if I know. I’m sure someone smart will say something in the comments.

Whatever you decide, though, you’ll probably separate that "module" concern from your authored code. Perhaps a build process can help put all that together for you. Perhaps you can offer your thing in all the different module formats, so everybody is happy. What do you use for that build process? Grunt? Gulp? Too old? The hip thing these days is an npm script, which is just some terminal commands put into a `package.json` file.
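
As a sketch (the package name, file paths, and tool choices here are hypothetical, not recommendations), an npm-script setup in `package.json` can be as small as:

```json
{
  "name": "the-thing",
  "version": "1.0.0",
  "main": "dist/the-thing.js",
  "scripts": {
    "test": "node test/the-thing.test.js",
    "minify": "uglifyjs src/the-thing.js -o dist/the-thing.min.js",
    "build": "npm run test && npm run minify"
  }
}
```

Then `npm run build` runs your tests and produces the minified file, no task runner required.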

While your build is processing, you might as well have it run your tests. You also might as well create an uglified version. Every library has a `the-thing.min.js`, right? You might as well put all that auto-generated stuff into a `/dist/` folder. Maybe you’ll even be super hip and not check that into the repo. If you want this thing, you grab the repo and build it yourself!

Still, you do want people to use this thing. Having the public repo isn’t enough, it’s also going to need to go on npm, which is a whole different site and world. Hopefully, you’ve done some naming research early, as you’ll need to name your package and that must be unique. There is a variety of other stuff to know here too, like making sure the repo is clean and you’ve ignored things you don’t want everyone to download with the package.

Wait what about Yarn? Isn’t that the new npm or something? You might want to publish there too. Some folks are hanging onto Bower as well, so you might wanna make your thing work there too.

OK! Done! Better take a quick nap and get ready for the demands to start rolling in on the issues board!

Just kidding everything will be fine.


More Gotchas Getting Inline SVG Into Production—Part II


The following is a guest post by Rob Levin and Chris Rumble. Rob and Chris both work on the product design team at Mavenlink. Rob is also creator and host of the SVG Immersion Podcast and wrote the original 5 Gotchas article back in ’14. Chris is a UI and Motion Designer/Developer based out of San Francisco. In this article, they go over some additional issues they encountered after incorporating inline SVG into Mavenlink’s flagship application more than 2 years ago. The article illustrations were done by Rob and, in the spirit of our topic, are 100% vector SVGs!

Explorations in Debugging

Wow, it’s been over 2 years since we posted the 5 Gotchas Getting SVG Into Production article. Well, we’ve encountered some new gotchas, making it time for another follow-up post! We’ll label these 6-10, paying homage to the first 5 gotchas in the original post 🙂

Gotcha Six: IE Drag & Drop SVG Disappears

SVG Disappears After Drag and Drop in IE

If you take a look at the animated GIF above, you’ll notice that I have a dropdown of task icons on the left. I attempt to drag the row outside of the sortable’s container element, and then, when I drop the row back, the SVG icons have completely disappeared. This insidious bug didn’t seem to happen on Windows 7 IE11 in my tests, but it did happen in Windows 10’s IE11! In our example, the issue happens due to the combination of jQuery UI Sortable and the nestedSortable plugin (which needs to be able to drag items off the container to achieve the nesting), but any sort of detaching of DOM elements and/or moving them in the DOM could result in this disappearing behavior. Oddly, I wasn’t able to find a Microsoft ticket at the time of writing, but if you have access to a Windows 10 / IE11 setup, you can see for yourself in this simple pen, which was forked from fergaldoyle. The Pen shows the same essential disappearing behavior, but this time it’s caused by simply moving an element containing an SVG icon via JavaScript’s appendChild.

A solution to this is to reset the href.baseVal attribute on all <use> elements that descend from the container element when a callback is called. For example, in the case of using Sortable, we were able to call the following method from inside Sortable’s stop callback:

function ie11SortableShim(uiItem) {
  function shimUse(i, useElement) {
    if (useElement.href && useElement.href.baseVal) {
      // this triggers fixing of href for IE
      useElement.href.baseVal = useElement.href.baseVal;
    }
  }

  if (isIE11()) {
    $(uiItem).find('use').each(shimUse);
  }
}

I’ve left out the isIE11 implementation, as it can be done a number of ways (sadly, most reliably through sniffing the window.navigator.userAgent string and matching a regex). But, the general idea is: find all the <use> elements in your container element, and then reassign their href.baseVal to trigger IE to re-fetch those external xlink:href references. Now, you may have an entire row of complex nested sub-views and may need to go with a more brute force approach. In my case, I also needed to do:


to rerender the row. Your mileage may vary 😉

If you’re experiencing this outside of Sortable, you likely just need to hook into some "after" event on whatever the parent/container element is, and then do the same sort of thing.

As I’m boggled by this IE11 specific issue, I’d love to hear if you’ve encountered this issue yourself, have any alternate solutions and/or greater understanding of the root IE issues, so do leave a comment if so.

Gotcha Seven: IE Performance Boosts Replacing SVG4Everybody with Ajax Strategy

Performance Issues

In the original article, we recommended using SVG4Everybody as a means of shimming IE versions that don’t support referencing an external SVG definitions file via the xlink:href attribute. But, it turns out to be problematic for performance to do so, and probably more kludgy as well, since it’s based on user agent sniffing regexes. A more "straightforward" approach is to use Ajax to pull in the SVG sprite. Here’s a slice of our code that does this, which is essentially the same as what you’ll find in the linked article:

  loadSprite = null;

  (function() {
    var loading = false;
    return loadSprite = function(path) {
      if (loading) {
        return;
      }
      return document.addEventListener('DOMContentLoaded', function(event) {
        var xhr;
        loading = true;
        xhr = new XMLHttpRequest();
        xhr.open('GET', path, true);
        xhr.responseType = 'document';
        xhr.onload = function(event) {
          var el, style;
          el = xhr.responseXML.documentElement;
          style = el.style;
          style.display = 'none';
          return document.body.insertBefore(el, document.body.childNodes[0]);
        };
        return xhr.send();
      });
    };
  })();

  module.exports = {
    loadSprite: loadSprite
  };

The interesting part about all this for us was that, on our icon-heavy pages, we went from ~15 seconds down to ~1-2 seconds (for the first uncached page hit) in IE11.

Something to consider about the Ajax approach: you’ll potentially need to deal with a "flash of no SVG" until the HTTP request resolves. But, in cases where you already have a heavy initial-loading SPA-style application that throws up a spinner or progress indicator, that might be a sunk cost. Alternatively, you may wish to just go ahead and inline your SVG definitions/sprite and take the cache hit for better perceived performance. If so, measure just how much you’re increasing the payload.

Gotcha Eight: Designing Non-Scaling Stroke Icons

In cases where you want to have various sizes of the same icon, you may want to lock down the stroke sizes of those icons…

Why, what’s the issue?

Strokes VS Fills

Imagine you have a height: 10px; width: 10px; icon with some 1px shapes and scale it to 15px. Those 1px shapes will now be 1.5px, which ends up creating a soft or fuzzy icon due to borders being displayed on sub-pixel boundaries. This softness also depends on what you scale to, as that will have a bearing on whether your icons land on sub-pixel boundaries. Generally, it’s best to control the sharpness of your icons rather than leaving them up to the will of the viewer’s browser.

The other problem is more of a visual weight issue. As you scale a standard icon using fills, it scales proportionately… I can hear you saying "SVGs are supposed to do that". Yes, but being able to control the stroke of your icons can help them feel more related and be seen as more of a family. I like to think of it like using a text typeface for titling rather than a display or titling typeface: you can do it, but why, when you could have a tight and sharp UI?

Prepping the Icon

I primarily use Illustrator to create icons, but plenty of tools out there will work fine; this is just my workflow with one of those tools. I start creating an icon by focusing on what it needs to communicate, not really anything technical. After I’m satisfied that it solves my visual needs, I then start scaling and tweaking it to fit our technical needs. First, size and align your icon to the pixel grid (⌘⌥Y in Illustrator for pixel preview, on a Mac) at the size you are going to be using it. I try to keep diagonals on 45° and adjust any curves or odd shapes to keep them from getting weird. No formula exists for this; just get it as close as you can to something you like. Sometimes I scrap the whole idea if it’s not gonna work at the size I need and start from scratch. If it’s the best visual solution but no one can identify it… it’s not worth anything.

Exporting AI

I usually just use the Export As "SVG" option in Illustrator; I find it gives me a standard and minimal place to start. I use the Presentation Attributes setting and save it off. It will come out looking something like this:

<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" width="18" height="18" viewBox="0 0 18 18">
  <polyline points="5.5 1.5 0.5 1.5 0.5 4.5 0.5 17.5 17.5 17.5 17.5 1.5 12.5 1.5" fill="none" stroke="#b6b6b6" stroke-miterlimit="10"/>
  <rect x="5.5" y="0.5" width="7" height="4" fill="none" stroke="#b6b6b6" stroke-miterlimit="10"/>
  <line x1="3" y1="4.5" x2="0.5" y2="4.5" fill="none" stroke="#b6b6b6" stroke-miterlimit="10"/>
  <line x1="17.5" y1="4.5" x2="15" y2="4.5" fill="none" stroke="#b6b6b6" stroke-miterlimit="10"/>
  <polyline points="6 10 8 12 12 8" fill="none" stroke="#ffa800" stroke-miterlimit="10" stroke-width="1"/>
</svg>

I know you see a couple of 1/2 pixels in there! Seems like there are a few schools of thought on this. I prefer to have the stroke line up to the pixel grid, as that is what will display in the end. The coordinates are placed on the 1/2 pixel so that your 1px stroke is 1/2 on each side of the path. It looks something like this (in Illustrator):

Strokes on the Pixel Grid

Gotcha Nine: Implementing Non-Scaling Stroke

Clean Up

Galactic Vacuum

Our Grunt task, which Rob talks about in the previous article, cleans up almost everything. Unfortunately, for the non-scaling-stroke you have some hand-cleaning to do on the SVG, but I promise it is easy! Just add a class to the paths on which you want to restrict stroke scaling. Then, in your CSS, add a class that applies vector-effect: non-scaling-stroke;, which should look something like this:

.non-scaling-stroke {
  vector-effect: non-scaling-stroke;
}
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 18 18">
  <polyline class="non-scaling-stroke" points="5.5 1.5 0.5 1.5 0.5 4.5 0.5 17.5 17.5 17.5 17.5 1.5 12.5 1.5" stroke="#b6b6b6" stroke-miterlimit="10"/>
  <rect class="non-scaling-stroke" x="5.5" y="0.5" width="7" height="4" stroke="#b6b6b6" stroke-miterlimit="10"/>
  <line class="non-scaling-stroke" x1="3" y1="4.5" x2="0.5" y2="4.5" stroke="#b6b6b6" stroke-miterlimit="10"/>
  <line class="non-scaling-stroke" x1="17.5" y1="4.5" x2="15" y2="4.5" stroke="#b6b6b6" stroke-miterlimit="10"/>
  <polyline class="non-scaling-stroke" points="6 10 8 12 12 8" stroke="#ffa800" stroke-miterlimit="10" stroke-width="1"/>
</svg>

This keeps the strokes, if specified, from changing when the SVG is scaled (in other words, the strokes will remain at 1px even if the overall SVG is scaled). We also add fill: none; to a class in our CSS, where we also control the stroke color, as paths will fill with #000000 by default. That’s it! Now you have beautiful pixel-adherent strokes that will maintain stroke width!

And after all is said and done (and you have preprocessed via grunt-svgstore per the first article), your SVG will look like this in the defs file:

  <symbol viewBox="0 0 18 18" id="icon-task-stroke">
    <path class="non-scaling-stroke" stroke-miterlimit="10" d="M5.5 1.5h-5v16h17v-16h-5"/>
    <path class="non-scaling-stroke" stroke-miterlimit="10" d="M5.5.5h7v4h-7zM3 4.5H.5M17.5 4.5H15"/>
    <path class="non-scaling-stroke" stroke="currentColor" stroke-miterlimit="10" d="M6 10l2 2 4-4"/>
  </symbol>

CodePen Example

The icon set on the left is scaling proportionately; on the right, we are using vector-effect: non-scaling-stroke;. If you’re noticing that your resized SVG icons’ strokes are starting to look out of control, the above technique will give you the ability to lock those babies down.

See the Pen SVG Icons: Non-Scaling Stroke by Chris Rumble (@Rumbleish) on CodePen.

Gotcha Ten: Accessibility

Accessible planet illustration

With everything involved in getting your SVG icon system up and running, it’s easy to overlook accessibility. That’s a shame, because SVGs are inherently accessible, especially compared to icon fonts, which are known to not always play well with screen readers. At bare minimum, we need to sprinkle in a bit of code to prevent any text embedded within our SVG icons from being announced by screen readers. Although we’d love to just add a <title> tag with alternative text and "call it a day", the folks at Simply Accessible have found that Firefox and NVDA will not, in fact, announce the <title> text.

Their recommendation is to apply the aria-hidden="true" attribute to the <svg> itself, and then add an adjacent span element with a .visuallyhidden class. That element will be hidden visually, but its text will be available for the screen reader to announce. I’m bummed that it doesn’t feel very semantic, but it may be a reasonable compromise while support for the more intuitively reasonable <title> tag (and combinations of friends like role, aria-labelledby, etc.) works across both browser and screen reader implementations. To my mind, the aria-hidden on the SVG may be the biggest win, as we wouldn’t want to inadvertently set off the screen reader for, say, 50 icons on a page!

Here’s the general pattern, borrowed but altered a bit from Simply Accessible’s pen:

<a href="/somewhere/foo.html">
    <svg class="icon icon-close" viewBox="0 0 32 32" aria-hidden="true">
        <use xlink:href="#icon-close"></use>
    </svg>
    <span class="visuallyhidden">Close</span>
</a>

As stated before, the two interesting things here are:

  1. The aria-hidden attribute, applied to prevent screen readers from announcing any text embedded within the SVG.
  2. The nasty but useful visuallyhidden span, which WILL be announced by the screen reader.

Honestly, if you would rather just code this with the <title> tag et al. approach, I wouldn’t necessarily argue with you, as this does feel kludgy. But, as you look at the code we’ve used, you could treat this solution as a version 1 implementation, and then make the switch quite easily when support is better…

Assuming you have some sort of centralized template helper or utils system for generating your use xlink:href fragments, it’s quite easy to implement the above. We write this in CoffeeScript, but since JavaScript is more universal, here’s the code it compiles to:

  templateHelpers = {
    svgIcon: function(iconName, iconClasses, iconAltText, iconTitle) {
      var altTextElement = iconAltText ? "<span class='visuallyhidden'>" + iconAltText + "</span>" : '';
      var titleElement = iconTitle ? "<title>" + iconTitle + "</title>" : '';
      iconClasses = iconClasses ? " " + iconClasses : '';
      return "<svg aria-hidden='true' class='icon-new" + iconClasses + "'><use xlink:href='#" + iconName + "'>" + titleElement + "</use></svg>" + altTextElement;
    }
  };

Why are we putting the <title> tag as a child of <use> instead of the <svg>? According to Amelia Bellamy-Royds (invited expert developing SVG & ARIA specs @w3c; author of SVG books from @oreillymedia), you will get tooltips in more browsers.

Here’s the CSS for .visuallyhidden. If you’re wondering why we’re doing it this particular way and not with, say, display: none; or other familiar means, see Chris Coyier’s article, which explains this in depth:

.visuallyhidden {
    border: 0;
    clip: rect(0 0 0 0);
    height: 1px;
    width: 1px;
    margin: -1px;
    padding: 0;
    overflow: hidden;
    position: absolute;
}

This code is not meant to be used "copy pasta" style, as your system will likely have nuanced differences. But, it shows the general approach, and the important bits are:

  • the iconAltText argument, which allows the caller to provide alternative text if it seems appropriate (e.g. the icon is not purely decorative)
  • the aria-hidden="true" attribute, which is now always placed on the SVG element
  • the .visuallyhidden class, which hides the element visually while still making its text available for screen readers

As you can see, it’d be quite easy to later refactor this code to use the usually recommended <title> approach down the road, and at least the maintenance hit won’t be bad should we choose to do so. The relevant refactor changes would likely be similar to:

var aria = iconAltText ? 'role="img" aria-label="' + iconAltText + '"' : 'aria-hidden="true"';
return "<svg " + aria + " class='icon-new " + iconClasses + "'><use xlink:href='#" + iconName + "'>" + titleElement + "</use></svg>";

So, in this version (credit to Amelia for the aria part!), if the caller passes alternative text in, we do NOT hide the SVG, and we also do not use the visually hidden span technique; instead we add the role and aria-label attributes to the SVG. This feels much cleaner, but the jury is out on whether screen readers support this approach as well as the visually hidden span technique. Maybe the experts (Amelia and the Simply Accessible folks) will chime in in the comments 🙂

Bonus Gotcha: Make viewBox width and height integers or scaling gets funky

If you have an SVG icon that you export with a resulting viewBox like viewBox="0 0 100 86.81", you may have issues if you use transform: scale. For example, if you’re setting the width and height equal, as is typical (e.g. 16px x 16px), you might expect the SVG to just center itself in its containing box, especially if you’re using the defaults for preserveAspectRatio. But, if you attempt to scale it at all, you’ll start to notice clipping.

In the following Adobe Illustrator screen capture, you see that „Snap to Grid” and „Snap to Pixel” are both selected:

Align and Snap to Pixel Grid

The following pen shows the first three icons getting clipped. This particular icon (it’s defined as a <symbol> and then referenced using the xlink:href strategy we’ve already gone over) has a viewBox with a non-integer height of 86.81, and thus we see the clipping on the sides. The next 3 examples (icons 4-6) have integer widths and heights (the third argument to viewBox is the width and the fourth is the height), and do not clip.
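
If you’d rather fix the exported file than re-export from your editor, a tiny helper along these lines can round the viewBox dimensions (the function name and approach are a sketch, not from our build pipeline):

```javascript
// Sketch: normalize a viewBox attribute so its width and height (the
// third and fourth values) are integers, avoiding the clipping seen
// when scaling an icon with a fractional viewBox.
function roundViewBox(viewBox) {
  const [minX, minY, width, height] = viewBox.trim().split(/\s+/).map(Number);
  return [minX, minY, Math.round(width), Math.round(height)].join(' ');
}

console.log(roundViewBox('0 0 100 86.81')); // → '0 0 100 87'
```

You could run this over your exported SVGs as part of the same build step that sprites them.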

See the Pen SVG Icons: Scale Clip Test 2 by Rob Levin (@roblevin) on CodePen.


The above challenges are just some of the ones we’ve encountered at Mavenlink, having had a comprehensive SVG icon system in our application for well over 2 years now. The mysterious nature of some of these is par for the course, given our splintered world of various browsers, screen readers, and operating systems. But perhaps these additional gotchas will help you and your team better harden your SVG icon implementations!


Glue Cross-Browser Responsive Irregular Images with Sticky Tape


I recently came across this Atlas of Makers by Vasilis van Gemert. Its fun and quirky appearance made me look under the hood, and it was certainly worth it! What I discovered is that it was actually built using really cool features that so many articles and talks have been written about over the past few years, but that somehow don’t get used that much in the wild: the likes of CSS Grid, custom properties, blend modes, and even SVG.

SVG was used to create the irregular images that appear as if they were glued onto the page with pieces of neon sticky tape. This article is going to explain how to recreate that in the simplest possible manner, without ever needing to step outside the browser. Let’s get started!

The first thing we do is pick an image we start from, for example, this pawesome snow leopard:

Fluffy snow leopard walking through the snow, looking up at the camera with an inquisitive face.
The image we’ll be using: a fluffy snow leopard.

The next thing we do is get a rough polygon we can fit the cat in. For this, we use Bennett Feely’s Clippy. We’re not actually going to use CSS clip-path since it’s not cross-browser yet (but if you want it to be, please vote for it – no sign-in required); Clippy just lets us get the numbers for the polygon points really fast, without needing an actual image editor with a ton of buttons and options that make you give up before even starting.

We set the custom URL for our image and set custom dimensions. Clippy limits these dimensions based on viewport size, but for us, in this case, the actual dimensions of the image don’t really matter (especially since the output is only going to be % values anyway), only the aspect ratio, which is 2:3 in the case of our cat picture.

Clippy screenshot showing from where to set custom dimensions and URL. The custom dimensions need to satisfy a 2:3 ratio so we pick 540 = 2*270 for the width and 810 = 3*270 for the height.
Clippy: setting custom dimensions and URL.

We turn on the “Show outside clip-path” option to make it easier to see what we’ll be doing.

Animated gif. Illustrates turning on the 'Show outside clip-path' option. This is going to be useful later, while picking the vertices of the polygon that roughly clips the image around the cat, so that we can see outside the partial polygon we have before we get all the points.
Clippy: turning on the “Show outside clip-path” option.

We then choose to use a custom polygon for our clipping path, we select all the points, we close the path and then maybe tweak some of their positions.

Clippy: selecting the points of a custom polygon that very roughly approximates the shape of the cat.

This has generated the CSS clip-path code for us. We copy just the list of points (as % values), bring up the console and paste this list of points as a JavaScript string:

let coords = '69% 89%, 84% 89%, 91% 54%, 79% 22%, 56% 14%, 45% 16%, 28% 0, 8% 0, 8% 10%, 33% 33%, 33% 70%, 47% 100%, 73% 100%';

We get rid of the % characters and split the string:

coords = coords.replace(/%/g, '').split(', ').map(c => c.split(' '));

We then set the dimensions for our image:

let dims = [736, 1103];

After that, we scale the coordinates we have to the dimensions of our image. We also round the values we get because we’re sure not to need decimals for a rough polygonal approximation of the cat in an image that big.

coords = coords.map(c => c.map((c, i) => Math.round(.01*dims[i]*c)));

Finally, we bring this to a form we can copy from dev tools:

`[${coords.map(c => `[${c.join(', ')}]`).join(', ')}]`;
Screenshot of the steps described above in the dev tools console.
Screenshot of the steps above in the dev tools console.
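Putting the console steps together, the whole conversion reads as one pipeline (same coordinate string and dimensions as above):

```javascript
// Scale Clippy's %-based polygon points to the image's pixel dimensions.
let coords = '69% 89%, 84% 89%, 91% 54%, 79% 22%, 56% 14%, 45% 16%, 28% 0, 8% 0, 8% 10%, 33% 33%, 33% 70%, 47% 100%, 73% 100%';
let dims = [736, 1103]; // intrinsic width and height of the image

coords = coords
  .replace(/%/g, '')       // drop the % signs
  .split(', ')             // one entry per point
  .map(c => c.split(' '))  // split each point into [x, y]
  .map(c => c.map((c, i) => Math.round(.01*dims[i]*c))); // % → px, rounded

console.log(`[${coords.map(c => `[${c.join(', ')}]`).join(', ')}]`);
// the first point comes out as [508, 982]
```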

Now we move on to generating our SVG with Pug. Here’s where we use the array of coordinates we got at the previous step:

- var coords = [[508, 982], [618, 982], [670, 596], [581, 243], [412, 154], [331, 176], [206, 0], [59, 0], [59, 110], [243, 364], [243, 772], [346, 1103], [537, 1103]];
- var w = 736, h = 1103;

svg(viewBox=[0, 0, w, h].join(' '))
  clipPath#cp
    polygon(points=coords.join(' '))
  //- the image URL below is a placeholder for your own image
  image(xlink:href='cat.jpg' width=w height=h clip-path='url(#cp)')

This gives us the irregular shaped image we’ve been after:

See the Pen by thebabydino (@thebabydino) on CodePen.

Now let’s move on to the pieces of sticky tape. In order to generate them, we use the same array of coordinates. Before doing anything else at this step, we read its length so that we can loop through it:

-// same as before
- var n = coords.length;

svg(viewBox=[0, 0, w, h].join(' '))
  -// same as before
  - for(var i = 0; i < n; i++) {
  - }

Next, within this loop, we have a random test to decide whether we have a strip of sticky tape from the current point to the next point:

- for(var i = 0; i < n; i++) {
  - if(Math.random() > .5) {
    path(d=`M${coords[i]} ${coords[(i + 1)%n]}`)
  - }
- }

At first sight, this doesn’t appear to do anything.

However, this is because the default stroke is none. Making this stroke visible (by setting it to an hsl() value with a randomly generated hue) and thicker reveals our sticky tape:

stroke: hsl(random(360), 90%, 60%);
stroke-width: 5%;
mix-blend-mode: multiply;

We’ve also set mix-blend-mode: multiply on it so that overlap becomes a bit more obvious.

See the Pen by thebabydino (@thebabydino) on CodePen.

Looks pretty good, but we still have a few problems here.

The first and most obvious one being that this isn’t cross-browser. mix-blend-mode doesn’t work in Edge (if you want it, don’t forget to vote for it). The way we can get a close enough effect is by making the stroke semitransparent just for Edge.

My initial idea here was to do this in a way that’s only supported in Edge for now: using a calc() value whose result isn’t an integer for the RGB components. The problem is that we have an hsl() value, not an rgb() one. But since we’re using Sass, we can extract the RGB components:

$c: hsl(random(360), 90%, 60%);
stroke: $c;
stroke: rgba(calc(#{red($c)} - .5), green($c), blue($c), .5)

The last rule is the one that gets applied in Edge, but is discarded due to the calc() result in Chrome and simply due to the use of calc() in Firefox, so we get the result we want this way.

Screenshot of the Chrome (left) and Firefox (right) dev tools showing how the second stroke rule is seen as invalid and discarded by both browsers
The second stroke rule seen as invalid in Chrome (left) and Firefox (right) dev tools.

However, this won’t be the case anymore if the other browsers catch up with Edge here.

So a more future-proof solution would be to use @supports:

path {
  $c: hsl(random(360), 90%, 60%);
  stroke: rgba($c, .5);

  @supports (mix-blend-mode: multiply) { stroke: $c }
}

The second problem is that we want our strips to expand a bit beyond their end points. Fortunately, this problem has a straightforward fix: setting stroke-linecap to square. This effectively makes our strips extend by half a stroke-width beyond each of their two ends.

See the Pen by thebabydino (@thebabydino) on CodePen.

The final problem is that our sticky strips get cut off at the edge of our SVG. Even if we set the overflow property to visible on the SVG, the container our SVG is in might cut it off anyway or an element coming right after might overlap it.

So what we can try to do is increase the viewBox space all around the image by an amount we’ll call p that’s just enough to fit our sticky tape strips.

-// same as before
- var w1 = w + 2*p, h1 = h + 2*p;

svg(viewBox=[-p, -p, w1, h1].join(' '))
  -// same as before

The question here is… how much is that p amount?

Well, in order to get that value, we need to take into account the fact that our stroke-width is a % value. In SVG, a % value for something like stroke-width is computed relative to the diagonal of the SVG region. In our case, this SVG region is a rectangle of width w and height h. If we draw the diagonal of this rectangle, we see that we can compute it using Pythagoras’ theorem in the yellow highlighted triangle.

SVG illustration showing the SVG rectangle and highlighting half of it - a right triangle where the catheti are the SVG viewBox width and height and the hypotenuse is the SVG diagonal.
The diagonal of the SVG rectangle can be computed from a right triangle where the catheti are the SVG viewBox width and height.

So our diagonal is:

- var d = Math.sqrt(w*w + h*h);

From here, we can compute the stroke-width as 5% of the diagonal. This is equivalent to multiplying the diagonal (d) with a .05 factor:

- var f = .05, sw = f*d;

Note that this is moving from a % value (5%) to a value in user units (.05*d). This is going to be convenient as, by increasing the viewBox dimensions we also increase the diagonal and, therefore, what 5% of this diagonal is.

The stroke of any path is drawn half inside, half outside the path line. However, we need to increase the viewBox space by more than half a stroke-width. We also need to take into account the fact that the stroke-linecap extends beyond the endpoints of the path by half a stroke-width:

SVG illustration showing how the stroke is drawn half on one side of the path line, half on the other side and how a square linecap causes it to expand by half a stroke-width beyond its endpoints.
The effect of stroke-width and stroke-linecap: square.

Now let’s consider the situation when a point of our clipping polygon is right on the edge of our original image. We only consider one of the polygonal edges that have an end at this point in order to simplify things (everything is just the same for the other one).

We take the particular case of a strip along a polygonal edge having one end E on the top edge of our original image (and of the SVG as well).

Screenshot showing the result of the latest demo and highlighting an edge which has an end on the top boundary of the original image.
Highlighting a polygonal edge which has one endpoint on the top boundary of the original image.

We want to see by how much this strip can extend beyond the top edge of the image in the case when it’s created with a stroke and the stroke-linecap is set to square. This depends on the angle formed with the top edge and we’re interested in finding the maximum amount of extra space we need above this top boundary so that no part of our strip gets cut off by overflow.

In order to understand this better, the interactive demo below allows us to rotate the strip and get a feel for how far the outer corners of the stroke creating this strip (including the square linecap) can extend:

See the Pen by thebabydino (@thebabydino) on CodePen.

As the demo above illustrates by tracing the position of the outer corners of the stroke (including the stroke-linecap), the extra space needed beyond the image boundary is greatest when the segment between the endpoint on the edge E and the outer corner of the stroke at that endpoint (this outer corner being either A or B, depending on the angle) is vertical, and the amount needed equals the length of that segment.

Given that the stroke extends by half a stroke-width beyond the endpoint, both in the tangent and in the normal direction, it follows that the length of this segment is the hypotenuse of a right isosceles triangle whose catheti each equal half a stroke-width:

SVG illustration showing how to get the segment between the path's endpoint and one of the outer corners of the stroke including the linecap: it's the hypotenuse in an isosceles triangle where the catheti are half a stroke width, along the normal direction because half the stroke is on one side of the path line while the other half is on the other and along the tangent direction because we have square linecap.
The segment connecting the endpoint to the outer corner of the stroke including the linecap is the hypotenuse in a right isosceles triangle where the catheti are half a stroke-width.

Using Pythagoras’ theorem in this triangle, we have:

- var hw = .5*sw;
- var p = hw*Math.sqrt(2); // Math.sqrt(hw*hw + hw*hw) simplifies to hw*Math.sqrt(2)
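Plugging in our image’s numbers (w = 736, h = 1103, f = .05), the whole chain can be checked in plain JavaScript:

```javascript
// Recompute the padding p for our 736×1103 image with a 5% stroke-width.
const w = 736, h = 1103, f = .05;

const d  = Math.sqrt(w*w + h*h); // diagonal of the SVG rectangle
const sw = f*d;                  // stroke-width in user units
const hw = .5*sw;                // half a stroke-width
const p  = hw*Math.sqrt(2);      // extra viewBox space needed all around

console.log(sw.toFixed(2), p.toFixed(2)); // → 66.30 46.88
```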

Putting it all together, our Pug code becomes:

-// same coordinates and initial dimensions as before
- var f = .05, d = Math.sqrt(w*w + h*h);
- var sw = f*d, hw = .5*sw;
- var p = +(hw*Math.sqrt(2)).toFixed(2);
- var w1 = w + 2*p, h1 = h + 2*p;

svg(viewBox=[-p, -p, w1, h1].join(' ') 
    style=`--sw: ${+sw.toFixed(2)}px`)
  -// same as before

while in the CSS we’re left to tweak that stroke-width for the sticky strips:

stroke-width: var(--sw);

Note that we cannot make --sw a unitless value in order to then set the stroke-width to calc(var(--sw)*1px) – while in theory this should work, in practice Firefox and Edge don’t yet support using calc() values for stroke-* properties.

The final result can be seen in the following Pen:

See the Pen by thebabydino (@thebabydino) on CodePen.

Glue Cross-Browser Responsive Irregular Images with Sticky Tape is a post from CSS-Tricks

Let’s Talk About Speech CSS

Post retrieved from: Let’s Talk About Speech CSS

Boston, like many large cities, has a subway system. Commuters on it are accustomed to hearing regular public address announcements.

Riders simply tune out some announcements, such as the pre-recorded station stop names repeated over and over. Or public service announcements from local politicians and celebrities—again, kind of repetitive and not worth paying attention to after the first time. Most important are service alerts, which typically deliver direct and immediate information riders need to take action on.

An informal priority

A regular rider’s ear gets trained to listen for important announcements, passively, while fiddling around on a phone or zoning out after a hard day of work. It’s not a perfect system—occasionally I’ll find myself trapped on a train that’s been pressed into express service.

But we shouldn’t remove lower priority announcements. It’s unclear what kind of information will be important to whom: tourists, new residents, or visiting friends and family, to name a few.

A little thought experiment: Could this priority be more formalized via sound design? The idea would be to use different voices consistently or to prefix certain announcements with specific tones. I’ve noticed an emergent behavior from the train operators that kind of mimics this: Sometimes they’ll use a short blast of radio static to get riders’ attention before an announcement.


I’ve been wondering if this kind of thinking can be extended to craft better web experiences for everyone. After all, sound is enjoying a renaissance on the web: the Web Audio API has great support, and most major operating systems now ship with built-in narration tools. Digital assistants such as Siri are near-ubiquitous, and podcasts and audiobooks are a normal part of people’s media diet.

Deep in CSS’—ahem—labyrinthine documentation are references to two Media Types that speak to the problem: aural and speech. The core idea is pretty simple: audio-oriented CSS tells a digitized voice how it should read content, the same way regular CSS tells the browser how to visually display content. Of the two, aural has been deprecated. speech Media Type detection is also tricky, as a screen reader may not communicate its presence to the browser.

The CSS 3 Speech Module, the evolved version of the aural Media Type, looks the most promising. Like display: none;, it is part of the small subset of CSS that has an impact on screen reader behavior. It uses traditional CSS property/value pairings alongside existing declarations to create an audio experience that has parity with the visual box model.

code {
  background-color: #292a2b;
  color: #e6e6e6;
  font-family: monospace;
  speak: literal-punctuation; /* Reads all punctuation out loud in iOS VoiceOver */
}

Just because you can, doesn’t mean you should

In his book Building Accessible Websites, published in 2003, author/journalist/accessibility researcher Joe Clark outlines some solid reasons for never altering the way spoken audio is generated. Of note:


Many browsers don’t honor the technology, so writing the code would be a waste of effort. Simple and direct.


Clark argues that developers shouldn’t mess with the way spoken content is read because they lack training to “craft computer voices, position them in three-dimensional space, and specify background music and tones for special components.”

This may be the case for some, but ours is an industry of polymaths. I’ve known plenty of engineers who develop enterprise-scale technical architecture by day and compose music by night. There’s also the fact that we’ve kind of already done it.

The point he’s driving at—crafting an all-consuming audio experience is an impossible task—is true. But the situation has changed. An entire audio universe doesn’t need to be created from whole cloth any more. Digitized voices are now supplied by most operating systems, and the number of inexpensive/free stock sound and sound editing resources is near-overwhelming.


For users of screen readers, the reader’s voice is their interface for the web. As a result, users can be very passionate about their screen reader’s voice. In this light, Clark argues for not changing how a screen reader sounds out content that is not in its dictionary.

Screen readers have highly considered defaults for handling digital interfaces, and probably tackle content types many developers would not even think to consider. For example, certain screen readers use specialized sound cues to signal behaviors. NVDA uses a series of tones to communicate an active progress bar:

Altering screen reader behavior effectively alters the user’s expected experience. Sudden, unannounced changes can be highly disorienting and can be met with fear, anger, and confusion.

A good parallel would be if developers were to change how a mouse scrolls and clicks on a per-website basis. This type of unpredictability is not a case of annoying someone, it’s a case of inadvertently rendering content more difficult to understand or changing default operation to something unfamiliar.

My voice is not your voice

A screen reader’s voice is typically tied to the region and language preference set in the operating system.

For example, iOS contains a setting not just for English, but for variations that include United Kingdom, Ireland, Singapore, New Zealand, and five others. A user picking UK English will, among other things, find their Invert Colors feature renamed to “Invert Colours.”

However, a user’s language preference setting may not be their primary language, the language of their country of origin, or the language of the country they’re currently living in. My favorite example is my American friend who set the voice on his iPhone to UK English to make Siri sound more like a butler.

UK English is also an excellent reminder that regional differences are a point of consideration, y’all.

Another consideration is biological and environmental hearing loss. It can manifest with a spectrum of severity, so the voice-balance property may have the potential to „move” the voice outside of someone’s audible range.

Also, the speed the voice reads out content may be too fast for some or too slow for others. Experienced screen reader operators may speed up the rate of speech, much as some users quickly scroll a page to locate information they need. A user new to screen readers, or a user reading about an unfamiliar topic may desire a slower speaking rate to keep from getting overwhelmed.

And yet

Clark admits that some of his objections exist only in the realm of the academic. He cites the case of a technologically savvy blind user who uses the power of CSS’ interoperability to make his reading experience pleasant.

According to my (passable) research skills, not much work has been done in asking screen reader users their preferences for this sort of technology in the fourteen years since the book was published. It’s also important to remember that screen reader users aren’t necessarily blind, nor are they necessarily technologically illiterate.

The idea here would be to treat CSS audio manipulation as something a user can opt into, either globally or on a per-site basis. Think web extensions like Greasemonkey/Tampermonkey, or when websites ask permission to send notifications. It could be as simple as the kinds of preference toggles users are already used to interacting with:

A fake NVDA screenshot
A fake screenshot simulating a preference in NVDA that would allow the user to enable or disable CSS audio manipulation.

There is already a precedent for this. Accessibility Engineer Léonie Watson notes that JAWS—another popular screen reader—“has a built in sound scheme that enables different voices for different parts of web pages. This suggests that perhaps there is some interest in enlivening the audio experience for screen reader users.”

Opt-in also supposes features such as whitelists to prevent potential abuses of CSS-manipulated speech. For example, a user could only allow certain sites with CSS-manipulated content to be read, or block things like unsavory ad networks who use less-than-scrupulous practices to get attention.

Opinions: I have a few

In certain situations a screen reader can’t know the context of content but can accept a human-authored suggestion on how to correctly parse it. For example, James Craig’s 2011 WWDC video outlines using speak-as values to make street names and code read accurately (starts at the 15:36 mark, requires Safari to view).

In programming, every symbol counts. Being able to confidently state the relationship between things in code is a foundational aspect of programming. The case of thisOne != thisOtherOne being read as „this one is equal to this other one” when the intent was „this one is not equal to this other one” is an especially compelling concern.

Off the top of my head, other examples where this kind of audio manipulation would be desirable are:

  • Ensuring names are pronounced properly.
  • Muting pronunciation of icons (especially icons made with web fonts) in situations where the developer can’t edit the HTML.
  • Using sound effect cues for interactive components that the screen reader lacks built-in behaviors for.
  • Creating a cloud-synced service that stores a user’s personal collection of voice preferences and pronunciation substitutions.
  • Ability to set a companion voice to read specialized content such as transcribed interviews or code examples.
  • Emoting. Until we get something approaching EmotionML support, this could be a good way to approximate the emotive intent of the author (No, emoji don’t count).
  • Spicing things up. If a user can’t view a website’s art direction, their experience relies on the skill of the writer or sound editor—on the internet this can sometimes leave a lot to be desired.

The reality of the situation

The CSS Speech Module document was last modified in March 2012. VoiceOver on iOS implements support using the following speak-as values for the speak property, as shown in this demo by accessibility consultant Paul J. Adam:

  • normal
  • digits
  • literal-punctuation
  • spell-out

Apparently, the iOS accessibility features Speak Selection and Speak Screen currently do not honor these properties.

Despite the fact that the CSS 3 Speech Module has yet to be ratified (and is therefore still subject to change), VoiceOver support signals that a de facto standard has been established. The popularity of iOS—millions of devices, 76% of which run the latest version of iOS—makes implementation worth considering. For those who would benefit from the clarity provided by these declarations, it could potentially make a big difference.

Be inclusive, be explicit

Play to CSS’ strengths and make small, surgical tweaks to website content to enhance the overall user experience, regardless of device or usage context. Start with semantic markup and a Progressive Enhancement mindset. Don’t override pre-existing audio cues for the sake of vanity. Use iOS-supported speak-as values to provide clarity where VoiceOver’s defaults need an informed suggestion.

Writing small utility classes and applying them to semantically neutral span tags wrapped around troublesome content would be a good approach. Here’s a recording of VoiceOver reading this CodePen to demonstrate:

Take care to extensively test to make sure these tweaks don’t impair other screen reading software. If you’re not already testing with screen readers, there’s no time like the present to get started!

Unfortunately, current support for CSS speech is limited. But learning what it can and can’t do, and the situations in which it could be used is still vital for developers. Thoughtful and well-considered application of CSS is a key part of creating robust interfaces for all users, regardless of their ability or circumstance.

Let’s Talk About Speech CSS is a post from CSS-Tricks

Jekyll Includes are Cool

Post retrieved from: Jekyll Includes are Cool

Dave Rupert:

When cruising through the Includes documentation I noticed a relatively new feature where you can pass data into a component.

I was similarly excited learning about Macros in Nunjucks. Then:

After a couple days of writing includes like this I thought to myself „Why am I not just writing Web Components?”

Direct Link to ArticlePermalink

Jekyll Includes are Cool is a post from CSS-Tricks

Designed Lines

Post retrieved from: Designed Lines

Ethan Marcotte on digital disenfranchisement and why we should design lightning fast, accessible websites:

We’re building on a web littered with too-heavy sites, on an internet that’s unevenly, unequally distributed. That’s why designing a lightweight, inexpensive digital experience is a form of kindness. And while that kindness might seem like a small thing these days, it’s a critical one. A device-agnostic, data-friendly interface helps ensure your work can reach as many people as possible, regardless of their location, income level, network quality, or device.

Direct Link to ArticlePermalink

Designed Lines is a post from CSS-Tricks

Why Use a Third-Party Form Validation Library?

Post retrieved from: Why Use a Third-Party Form Validation Library?

We’ve just wrapped up a great series of posts from Chris Ferdinandi on modern form validation. It starts here. These days, browsers have quite a few built-in tools for handling form validation including HTML attributes that can do quite a bit on their own, and a JavaScript API that can do even more. Chris even showed us that with a litttttle bit more work we can get down to IE 9 support with ideal UX.

So what’s up with third-party form validation libraries? Why would you use a library for something you can get for free?

You need deeper browser support.

All “modern” browsers + IE 9 down is pretty good, especially when you’ve accounted for cross-browser differences nicely as Chris did. But it’s not inconceivable that you need to go even deeper.

Libraries like Parsley go down a smidge further, to IE 8.

You’re using a JavaScript framework that doesn’t want you touching the DOM yourself.

When you’re working with a framework like React, you aren’t really attaching event handlers or inserting anything into the DOM manually at all. You might be passing values to a form element via props and setting error data in state.

You can absolutely do all that with native form validation and native constraint validation, but it also makes sense why you might reach for an add-on designed for that. You might reach for a package like react-validation that helps out in that world. Or Formik, which looks to be built just for this kind of thing:

Likewise, there is:

The hope is that these libraries are nice and lightweight because they take advantage of the native APIs when they can. I’ll leave that to you to look into when you need to reach for this kind of thing.

You’re compelled by the API.

One of the major reasons any framework enjoys success is because of API nicety. If it makes your code easy to understand and work on, that’s a big deal.

Sometimes the nicest JavaScript APIs start with HTML attributes for activation and configuration. Remember that native HTML constraint validation is all about HTML attributes controlling form validation, so ideally any third-party form validation library would use them in the spirit of progressive enhancement.

You’re compelled by the integrations.

They promised you it “Works with Bootstrap!” and your project uses Bootstrap, so that seemed like a good fit. I get it, but this is my least favorite reason. In this context, the only thing Bootstrap would care about is a handful of class names and where you stick the divs.

It validates more than the browser offers.

The browser can validate if an email address is technically valid, but not if it bounces or not. The browser can validate if a zip code looks like a real zip code, but not if it actually exists or not. The browser can validate if a URL is a proper URL, but not if it resolves or not. A third-party lib could potentially do stuff like this.
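A sketch of that split might look like the following, where looksLikeEmail is a deliberately simplified stand-in for the browser’s syntax check and checkDeliverable is a hypothetical server-side call (neither is a real library API):

```javascript
// Browser-style check: is this syntactically an email address?
// (Deliberately simplified; real browsers follow the HTML spec's rules.)
function looksLikeEmail(value) {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
}

// Hypothetical deeper check a third-party library might layer on top:
// only worth calling once the cheap syntax test passes.
async function validateEmail(value, checkDeliverable) {
  if (!looksLikeEmail(value)) return { valid: false, reason: 'syntax' };
  const deliverable = await checkDeliverable(value); // e.g. a server API call
  return deliverable ? { valid: true } : { valid: false, reason: 'bounces' };
}

console.log(looksLikeEmail('user@example.com')); // → true
console.log(looksLikeEmail('not-an-email'));     // → false
```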

You’re compelled by fancy features.

  • Perhaps the library offers a feature where when there is an error, it scrolls the form to that first error.
  • Perhaps the library offers a captcha feature, which is a related concept to form validation, and you need it.
  • Perhaps the library offers an event system that you like. It publishes events when certain things happen on the form, and that’s useful to you.
  • Perhaps the library not only validates the form, but creates a summary of all the errors to show the user.

These things are a little above and beyond straight-up form validation. You could do all of this with native validation, but I could see how this would drive adoption of a third-party library.

You need translation.

Native browser validation messages (the default kind that come with the HTML attributes) are in the language that browser is in. So the French version of Firefox spits out messages in French, despite the language of the page itself:

Third-party form validation libraries can ship with language packs that help with this. FormValidation is an example.


I’m not recommending a form validation library. In fact, if anything, the opposite.

I imagine that third-party form validation libraries are going to fall away a bit as browser support and UX gets better and better for the native APIs.

Or (and I imagine many already do this), internally they start using native APIs more and more, then offer nice features on top of the validation itself.

Why Use a Third-Party Form Validation Library? is a post from CSS-Tricks

CSS is Awesome

Post retrieved from: CSS is Awesome

I bought this mug recently for use at work. Being a professional web developer, I decided it would establish me as the office’s king of irony. The joke on it isn’t unique, of course. I’ve seen it everywhere from t-shirts to conference presentations.

Most of you reading this have probably encountered this image at least once. It’s a joke we can all relate to, right? You try and do something simple with CSS, and the arcane ways in which even basic properties interact inevitably borks it up.

If this joke epitomizes the collective frustration that developers have with CSS, then at the risk of ruining the fun, I thought it would be interesting to dissect the bug at its heart, as a case study in why people get frustrated with CSS.

The problem

See the Pen CSS is Awesome by Brandon (@brundolf) on CodePen.

There are three conditions that have to be met for this problem to occur:

  • The content can’t shrink to fit the container
  • The container can’t expand to fit the content
  • The container doesn’t handle overflow gracefully

In real-world scenarios, the second condition is most likely the thing that needs to be fixed, but we’ll explore all three.

Fixing the content size

This is a little bit unfair to the box’s content because the word AWESOME can’t fit on one line at the given font size and container width. By default, text wraps at white space and doesn’t break up words. But let’s assume for a moment that we absolutely cannot afford to change the container’s size. Perhaps, for instance, the text is a blown-up header on a site that’s being viewed on an especially small phone.

Breaking up words

To get a continuous word to wrap, we have to use the CSS property word-break. Setting it to break-all will instruct the browser to break up words if necessary to wrap text content within its container.

See the Pen CSS is Awesome: word-break by Brandon (@brundolf) on CodePen.

Other content

In this case, the only way to make the content more responsive was to enable word breaking. But there are other kinds of content that might be overflowing. If white-space were set to nowrap, the text wouldn’t even wrap between words. Or, the content could be a block-level element whose width or min-width is set to be greater than the container’s width.

Fixing the container size

There are many possible ways the container element might have been prevented from growing, for example: width, max-width, and flex. But the thing they all have in common is that the width is being determined by something other than its content. This isn’t inherently bad, especially since there is no fixed height, which in most cases would cause the content to simply expand downwards. But if you run into a variation on this situation, it’s worth considering whether you really need to be controlling the width, or whether it can be left up to the page to determine.

Alternatives to setting width

More often than not, if you set an element’s width, and you set it in pixels, you really meant to set either min-width or max-width. Ask yourself what you really care about. Was this element disappearing entirely when it lacked content because it shrunk to a width of 0? Set min-width, so that it has dimension but still has room to grow. Was it getting so wide that a whole paragraph fit on one line and was hard to read? Set max-width, so it won’t go beyond a certain limit, but also won’t extend beyond the edge of the screen on small devices. CSS is like an assistant: you want to guide it, not dictate its every move.
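In practice, the difference looks like this (the pixel values are just examples):

```css
/* Too rigid: the element can neither shrink nor grow */
.sidebar {
  width: 300px;
}

/* Usually what you actually meant: a range, not a dictate */
.sidebar {
  min-width: 150px; /* never collapses to nothing when empty */
  max-width: 400px; /* never stretches so wide it's unreadable */
}
```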

Overflow caused by flexbox

If one of your flex items has overflowing content, things get a little more complicated. The first thing you can do is check if you’re specifying its width, as in the previous section. If you aren’t, probably what’s happening is the element is "flex-shrinking". Flex items first get sized following the normal rules (width, content, etc.). The resulting size is called their flex-basis (which can also be set explicitly with a property of the same name). After establishing the flex basis for each item, flex-grow and flex-shrink are applied (or flex, which specifies both at once). The items grow and shrink in a weighted way, based on these two values and the container’s size.

Setting flex-shrink: 0 will instruct the browser that this item should never get smaller than its flex basis. If the flex basis is determined by content (the default), this should solve your problem. Be careful with this, though: you could end up running into the same problem again in the element’s parent. If this flex item refuses to shrink, even when the flex container is smaller than it, it’ll overflow and you’re back to square one.
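A minimal sketch of that flex case (class names are illustrative):

```css
.toolbar {
  display: flex;
}

.toolbar .label {
  /* Never shrink below the content-based flex basis.
     Caveat from above: if .toolbar itself is narrower than this
     item, the overflow problem just moves up a level. */
  flex-shrink: 0;
}
```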

Handling overflow

Sometimes there’s just no way around it. Maybe the container width is limited by the screen size itself. Maybe the content is a table of data, with rows that can’t be wrapped and columns that can’t be collapsed any further. We can still handle the overflow more gracefully than just having it spill out wherever.

overflow: hidden;

The most straightforward solution is to hide the content that’s overflowing. Setting overflow: hidden; will simply cut things off where they reach the border of the container element. If the content is of a more aesthetic nature and doesn’t include critical info, this might be acceptable.

See the Pen CSS is Awesome: overflow:hidden by Brandon (@brundolf) on CodePen.

If the content is text, we can make this a little more visually appealing by adding text-overflow: ellipsis;, which automatically adds a nice little "…" to text that gets cut off. It is worth noting, though, that you’ll see slightly less of the actual content to make room for the ellipsis. Also note that this requires overflow: hidden; to be set.
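For single-line text, the usual pattern combines three declarations (selector name is illustrative):

```css
.truncate {
  overflow: hidden;        /* required for text-overflow to take effect */
  white-space: nowrap;     /* keep the text on one line */
  text-overflow: ellipsis; /* render "…" where the text is cut off */
}
```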

See the Pen CSS is Awesome: ellipsis by Brandon (@brundolf) on CodePen.

overflow: auto;

The preferable remedy is usually going to be setting overflow-x: auto;. This gives the browser the go-ahead to add a scroll bar if the content overflows, allowing the user to scroll the container in that direction.
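Applied to the demo container, that’s one declaration (selector name is illustrative):

```css
.awesome-box {
  overflow-x: auto; /* a horizontal scrollbar appears only when content overflows */
}
```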

See the Pen CSS is Awesome: overflow:auto by Brandon (@brundolf) on CodePen.

This is a particularly graceful fallback, because it means that no matter what, the user will be able to access all of the content. Plus, the scrollbar will only appear if it’s needed, which means it’s not a bad idea to add this property in key places, even if you don’t expect it to come into play.

Why does this conundrum resonate so universally with people who have used CSS?

CSS is hard because its properties interact, often in unexpected ways. When you set one of them, you’re never just setting that one thing. That one thing combines with, bounces off of, and contradicts a dozen other things, including default things that you never actually set yourself.

One rule of thumb for mitigating this: never be more explicit than you need to be. Web pages are responsive by default. Writing good CSS means leveraging that fact instead of overriding it. Use percentages or viewport units instead of a media query if possible. Use min-width instead of width where you can. Think in terms of rules, in terms of what you really mean to say, instead of just adding properties until things look right. Try to get a feel for how the browser resolves layout and sizing, and make your changes and additions on top of that judiciously. Work with CSS, instead of against it.

Another rule of thumb is to let either width or height be determined by content. In this case, that wasn’t enough, but in most cases, it will be. Give things an avenue for expansion. When you’re setting rules for how your elements get sized, especially if those elements will contain text content, think through the edge cases. "What if this content was pared down to a single character? What if this content expanded to be three paragraphs? It might not look great, but would my layout be totally broken?"

CSS is weird. It’s unlike any other code, and that makes a lot of programmers uncomfortable. But used wisely it can, in fact, be awesome.

CSS is Awesome is a post from CSS-Tricks