Category archive: CSS

Using GraphQL Playground with Gatsby

I’m assuming most of you have already heard about Gatsby, and at least loosely know that it’s basically a static site generator for React sites. It generally runs like this:

  1. Data Sources → Pull data from anywhere.
  2. Build → Generate your website with React and GraphQL.
  3. Deploy → Send the site to any static site host.

What this typically means is that you can get your data from any recognizable data source — CMS, markdown, file systems and databases, to name a few — manage the data through GraphQL to build your website, and finally deploy your website to any static web host (such as Netlify or Zeit).

Screenshot of the Gatsby homepage. It shows the three different steps of the Gatsby build process showing how data sources get built and then deployed.
The Gatsby homepage illustration of the Gatsby workflow.

In this article, we are concerned with the build process, which is powered by GraphQL. This is the part where your data is managed. Unlike traditional REST APIs where you often need to send anonymous data to test your endpoints, GraphQL consolidates your APIs into a self-documenting IDE. Using this IDE, you can perform GraphQL operations such as queries, mutations, and subscriptions, as well as view your GraphQL schema, and documentation.

GraphQL has an IDE built right into it, but what if we want something a little more powerful? That’s where GraphQL Playground comes in and we’re going to walk through the steps to switch from the default over to GraphQL Playground so that it works with Gatsby.

GraphiQL and GraphQL Playground

GraphiQL is GraphQL’s default IDE for exploring GraphQL operations, but you could switch to something else, like GraphQL Playground. Both have their advantages. For example, GraphQL Playground is essentially a wrapper over GraphiQL but includes additional features such as:

  • Interactive, multi-column schema documentation
  • Automatic schema reloading
  • Support for GraphQL Subscriptions
  • Query history
  • Configuration of HTTP headers
  • Tabs
  • Extensibility (themes, etc.)

Choosing either GraphQL Playground or GraphiQL most likely depends on whether you need to use those additional features. There’s no strict rule that will make you write better GraphQL operations, or build a better website or app.

This post isn’t meant to sway you toward one versus the other. We’re looking at GraphQL Playground in this post specifically because it’s not the default IDE and there may be use cases where you need the additional features it provides and need to set things up to work with Gatsby. So, let’s dig in and set up a new Gatsby project from scratch. We’ll integrate GraphQL Playground and configure it for the project.

Setting up a Gatsby Project

To set up a new Gatsby project, we first need to install the gatsby-cli. This will give us Gatsby-specific commands we can use in the Terminal.

npm install -g gatsby-cli

Now, let’s set up a new site. I’ve decided to call this example “gatsby-playground” but you can name it whatever you’d like.

gatsby new gatsby-playground

Let’s navigate to the directory where it was installed.

cd gatsby-playground

And, finally, flip on our development server.

gatsby develop

Head over to http://localhost:8000 in the browser for the opening page of the project. Your Gatsby GraphQL operations are going to be located at http://localhost:8000/___graphql.

Screenshot of the starter page for a new Gatsby project. It says Welcome to your new Gatsby website. Now go build something great.
The GraphiQL interface. There are four panels from left to right showing the explorer, query variables and documentation.
The GraphiQL interface.
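
As a quick aside, the operations you explore in this IDE are the same ones you end up embedding in your pages. Purely as a hedged sketch (assuming the default starter’s siteMetadata.title field; the component name is arbitrary), a page query looks something like this:

import React from "react"
import { graphql } from "gatsby"

// A minimal page component; the query below is exactly the kind of
// operation you can try out in GraphiQL or GraphQL Playground first.
export default function Home({ data }) {
  return <h1>{data.site.siteMetadata.title}</h1>
}

export const query = graphql`
  {
    site {
      siteMetadata {
        title
      }
    }
  }
`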

At this point, I think it’s worth calling out that there is a desktop app for GraphQL Playground. You could just access your Gatsby GraphQL operations with the URL Endpoint localhost:8000/___graphql without going any further with this article. But, we want to get our hands dirty and have some fun under the hood!

Screenshot of the GraphQL Playground interface. It has two panels showing the Gatsby GraphQL operations.
GraphQL Playground running Gatsby GraphQL Operations.

Gatsby’s Environmental Variables

Still around? Cool. Moving on.

Since we’re not going to be relying on the desktop app, we’ll need to do a little bit of Environmental Variable setup.

Environmental Variables are variables used specifically to customize the behavior of a website in different environments. These environments could be when the website is in active development, or perhaps when it is live in production and available for the world to see. We can have as many environments as we want, and define different Environmental Variables for each of the environments.

Learn more about Environmental Variables in Gatsby.

Gatsby supports two environments: development and production. To set a development environmental variable, we need to have a .env.development file at the root of the project. Same sort of deal for production, but it’s .env.production.

To swap out both environments, we’ll need to set an environment variable in a cross-platform compatible way. Let’s create a .env.development file at the root of the project. Here, we set a key/value pair for our variables. The key will be GATSBY_GRAPHQL_IDE, and the value will be playground, like so:

GATSBY_GRAPHQL_IDE=playground

Accessing Environment Variables in JavaScript

In Gatsby, our Environmental Variables are only available at build time, or when Node.js is running (what we’ll call run time). Since the variables are loaded client-side at build time, we need to use them dynamically at run time. It is important that we restart our server or rebuild our website every time we modify any of these variables.
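
As a rough sketch of what that access looks like (this isn’t required for the IDE switch itself), anything on process.env can be read at build time, and only variables prefixed with GATSBY_ get embedded into client-side code:

// gatsby-config.js, which runs in Node at build time
console.log(process.env.GATSBY_GRAPHQL_IDE) // "playground" once the variable is loaded

// Client-side code: only GATSBY_-prefixed variables are embedded
// into the browser bundle, so this works in components too.
const ide = process.env.GATSBY_GRAPHQL_IDE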

To load our environmental variables into our project, we need to install a package:

yarn add env-cmd --dev // npm install --save-dev env-cmd

With that, we will change the develop script in package.json as the final step, to this instead:

"develop": "env-cmd --file .env.development --fallback gatsby develop"

The develop script instructs the env-cmd package to load environmental variables from a custom environmental variable file (.env.development in this case) and, if it can’t find it, to fall back to .env (if you see the need for one, create a .env file at the root of your project with the same content as .env.development).

And that’s it! But, hey, remember to restart the server since we changed the variable.

yarn start // npm run develop

If you refresh http://localhost:8000/___graphql in the browser, you should now see GraphQL Playground. Cool? Cool!

GraphQL Playground with Gatsby.

And that’s how we get GraphQL Playground to work with Gatsby!

So that’s how we get from GraphQL’s default GraphiQL IDE to GraphQL Playground. Like we covered earlier, the decision of whether or not to make the switch at all comes down to whether the additional features offered in GraphQL Playground are required for your project. Again, we’re basically working with a GraphiQL wrapper that piles on more features.

Resources

Here are some additional articles around the web to get you started with Gatsby and more familiar with GraphiQL, GraphQL Playground, and Environment Variables.

The post Using GraphQL Playground with Gatsby appeared first on CSS-Tricks.

How I Created a Code Beautifier in Two Days

I recently drew up a wireframe for a code beautifier. The next day, I decided to turn it into a real tool. The whole project took less than two days to complete.

I’d been thinking about building a new code beautifier for a while. The idea isn’t unique, but every time I use someone else’s tool, I find myself reapplying the same settings and dodging advertisements. 🤦🏻

I wanted a simple tool that worked well without the hassle, so last week I grabbed some paper and started sketching one out. I’m a huge fan of wireframing by hand. There’s just something about pencil and paper that makes the design part of my brain work better than staring at a screen.

I kicked off the design process by hand-drawing wireframes for the app.

I was immediately inspired after drawing the wireframe. The next day, I took a break from my usual routine to turn it into a something real. 👨🏻‍💻

Check it Out

The design

I knew I wanted the code editor to be the main focus of the tool, so I created a thin menu bar at the top that controls the mode (i.e. HTML, CSS, JavaScript) and settings. I eventually added an About button too.

The editor itself takes up most of the screen, but it blends in so you don’t really notice it. Instead of wasting space with instructions, I used a placeholder that disappears when you start typing.

The Dark Mode UI is based on a toggle that updates the styles.

At the bottom, I created a status bar that shows live stats about the code including the current mode, indentation settings, number of lines, number of characters, and document size in bytes. The right side of the status bar has a “Clear” and “Clean + Copy” button. The center has a logo shamelessly plugging my own service.

I don’t think many developers really code on phones, but I wanted this to work on mobile devices anyway. Aside from the usual responsive techniques, I had to watch the window size and adjust the tab position when the screen becomes too narrow.

I’m using flexbox and viewport units for vertical sizing. This was actually pretty easy to do with the exception of a little iOS quirk. Here’s a pen showing the basic wireframe. Notice how the textarea stretches to fill the unused space between the header and footer.

See the Pen
Full-page text editor with header + footer
by Cory LaViska (@claviska)
on CodePen.

If you look at the JavaScript tab, you’ll see the iOS quirk and the workaround. I’m not sure how to feature detect something like this, so for now it’s just a simple device check.

Handling settings

I wanted to keep the most commonly used settings easy to access, but also expose advanced settings for each mode. To do this, I made the settings button a popover with a link to more advanced settings inside. When a setting is changed, the UI updates immediately and the settings are persisted to localStorage.

The most common settings are contained in a small panel that provides quick access to them, while advanced settings are still accessible via a link in the panel.

I took advantage of Vue.js here. Each setting gets mapped to a data property, and when one of them changes, the UI updates (if required) and I call saveSettings(). It works something like this.

function saveSettings() {
  const settings = {};

  // settingsToStore is an array of property names that will be persisted
  // and "this" is referencing the current Vue model
  settingsToStore.map(key => settings[key] = this[key]);
  localStorage.setItem('settings', JSON.stringify(settings));
}

Every setting is a data property that gets synced to localStorage. This is a rather primitive way to store state, so I might update the app to use a state management library such as Vuex later on.

To restore settings, I have a restoreSettings() function that runs when the app starts up.

function restoreSettings() {
  const json = localStorage.getItem('settings');

  if (json) {
    try {
      const settings = JSON.parse(json);

      Object.keys(settings).forEach(key => {
        if (settingsToStore.includes(key)) {
          this[key] = settings[key];
        }
      });
    } catch (err) {
      window.alert('There was an error loading your previous settings');
    }
  }
}

The function fetches settings from localStorage, then applies them one by one ensuring only valid settings in settingsToStore get imported.

The Advanced Settings link opens a dialog with tabs for each mode. Despite having over 30 settings total, everything is organized and easy to access so users won’t feel overwhelmed.

Clicking the „Advanced Settings” link opens up language-specific preferences and shortcuts.

Applying themes

Dark mode is all the rage these days, so it’s enabled by default. There’s also a light theme for those who prefer it. The entire UI changes, except for popovers and dialogs.

I considered using prefers-color-scheme, which coincidentally landed in Firefox 67 recently, but I decided a toggle would probably be better. Browser support for the color theme preference query isn’t that great yet, plus developers are weird. (For example, I use macOS with the light theme, but my text editor is dark.)

The app with Light Mode UI enabled.

Defining features

Coming up with feature ideas is fairly easy. It’s limiting features for an initial release that’s hard. Here are the most relevant features I shipped right away:

  • Beautifies HTML, CSS, and JavaScript code
  • Syntax highlighting with tag/bracket matching
  • Paste or drop files to load code
  • Auto-detects indentation preference based on pasted code or dropped file
  • Light and dark themes
  • Clean and copy in one click
  • Keyboard shortcuts
  • Most JS Beautify options are configurable
  • Settings get stored indefinitely in localStorage
  • Minimal UI without ads (just an unobtrusive plug to my own service) 🙈

I also threw in a few easter eggs for fun. Try refreshing the page, exploring shortcuts, and sharing it on Facebook or Twitter to find them. 😉

The tools and libraries I used

I’m a big fan of Vue.js. It’s probably overkill for this project, but the Vue CLI let me start building with all the latest tooling via one simple command.

vue create beautify-code

I didn’t have to waste any time scaffolding, which helped me build this out quickly. Plus, Vue came in handy for things like live stats, changing themes, toggling settings, etc. I used various Element UI components for things like buttons, form elements, popovers, and dialogs.

The editor is powered by CodeMirror using custom styles. It’s a well-supported and fantastic project that I can’t recommend enough for in-browser code editing.

The library that does all the beautifying is called JS Beautify, which handles JavaScript, HTML, and CSS. JS Beautify runs on the client-side, so there’s really no backend to this app — your browser does all the work!

JS Beautify is incredibly easy to use. Install it with npm install js-beautify and run your code through the appropriate function.

import beautify from 'js-beautify';

const code = 'Your code here';
const settings = {
  // Your settings here
};

// HTML
const html = beautify.html(code, settings)

// CSS
const css = beautify.css(code, settings)

// JavaScript
const js = beautify.js(code, settings)

Each function returns a string containing the beautified code. You can change how each language is output by passing in your own settings.
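
As a small, hedged example (indent_size and end_with_newline are real JS Beautify options; the input string is just a stand-in):

import beautify from 'js-beautify';

const uglyJs = 'function add(a,b){return a+b}';

const prettyJs = beautify.js(uglyJs, {
  indent_size: 2,         // two-space indentation
  end_with_newline: true  // make sure the output ends with a newline
});

console.log(prettyJs); // nicely indented, readable output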

I’ve been asked a few times about Prettier, which is a comparable tool, so it’s worth mentioning that I chose JS Beautify because it’s less opinionated and more configurable. If there’s enough demand, I’ll consider adding an option to toggle between JS Beautify and Prettier.

I’ve used all of these libraries before, so integration was actually pretty easy. 😅


This project was made possible by my app, Surreal CMS. If you’re looking for a great CMS for static websites, check it out — it’s free for personal, educational, and non-profit websites!

Oh, and if you’re wondering what editor I used… it’s Visual Studio Code. 👨🏻‍💻

The post How I Created a Code Beautifier in Two Days appeared first on CSS-Tricks.

What the Web Needs Now (and how ARTIFACT is here for it)

I recently had the pleasure of joining Dave Rupert, Chris Coyier, and Chris Ferdinandi on the Shop Talk Show to talk about the upcoming ARTIFACT Conference (Austin, TX on Sept. 30 – Oct. 1, 2019). ARTIFACT is an intimate gathering of web designers and developers where we discuss ways to build web sites that work for everyone.

This isn’t our first rodeo! I started ARTIFACT back in 2013 with Christopher Schmitt and Ari Stiles (the team behind the legendary In Control and CSS Dev conferences). At that time, the sudden avalanche of web-enabled mobile devices was throwing the web design community for a loop. How do we best leverage the recently-introduced Responsive Design techniques to adapt our designs to a spectrum of screen sizes?! What does that do to our workflows?! What happens to our beloved Photoshop comps?! How do we educate our clients and structure our billing cycles?! It was an exciting time when we needed to adjust our processes quickly to take on a radically new web viewing environment.

After four events in 2013 and 2014, ARTIFACT took a little hiatus, but we are back for a five-year reunion in 2019. We are returning to a landscape where a lot of the challenges we faced in 2013 have been figured out or, at the very least, have settled down (although there is always room for innovation and improvement).

Is our work making the web better done? Not by a long shot! Now that we’ve got a handle on the low-bar requirement of getting something readable on all those screens, we can focus our energy on higher-order challenges. How do we make our sites work easier for people of all abilities? How do we make our content, products, and services welcoming to everyone? Does our code need to be so bloated and complicated? How can we make our sites simpler and faster? How can I put new tools like CSS Grid, Progressive Web Apps, static sites, and animation to good use?

To that end, this time around ARTIFACT is expanding its focus from “designing for all the devices” to “designing for all the people.” Simply put, we want a web that doesn’t leave anyone out, and we’ve curated our program to address inclusivity, performance, and the ways that new possibilities on the web affect our workflow.

A web for everyone

Inclusive design—including accessibility, diversity, and internationalization—has been bubbling to the top of the collective consciousness of the web-crafting community. I’m incredibly encouraged to see articles, conference talks, and podcasts devoted to improving the reach of the web. At ARTIFACT, inclusivity is a major theme that winds its way throughout our program.

Photo by Jopwell from Pexels
Benjamin Evans will talk about his efforts as the Inclusive Design Lead at AirBnB to create a user experience that does not alienate minority communities.
Accessibility expert Elle Waters will share best practices for integrating accessibility measures into our workflows.
We’ll also hear from David Dylan Thomas on how to recognize and address cognitive bias that can affect content and the overall user experience.

Even better performance

Visitors may also be turned away from our sites if pages take too long to load or use too much data. We know performance matters, yet sites on the whole grow more bloated with every passing year. Tim Kadlec (who knows more about performance than just about anybody) will examine the intersection of performance and inclusion in his talk “Outside Looking In” with lots of practical code examples for how to do better. We’ll also look at performance through the lens of Progressive Web Apps (presented by Jason Grigsby of Cloud Four). In fact, improving performance is a subtext to many of our developer-oriented talks.

Leveraging the modern browser

In the Good News Department, another big change since the first ARTIFACT is that browsers offer a lot more features out of the box, allowing us to leverage native browser behavior and simplify our code (Viva Performance!). Chris Ferdinandi will be demonstrating exactly that in his talk “Lean Web Development,” where he’ll point out ways that taking advantage of built-in browser functionality and writing JavaScript for just the interactivity you need may make a big framework unnecessary. Better native browser features also means un-learning some of our old coping mechanisms. We’ll get to delight at all the polyfills and workarounds that have been kicked to the curb since 2012 in Dave Rupert’s tale of “The Greatest Redesign Ever Told,” and we’ll see what best practices make sense going forward.

Workflow and process

One thing that hasn’t changed—and likely never will—is that never-ending hamster wheel of trying to keep up with an ever-changing web development landscape. I’m guessing that if you are reading CSS-Tricks right now, you know that feeling. The methods we use to build the web are always evolving with new tools, approaches, and possibilities, which is why best practices for adapting our workflows and processes have always been a central focus at ARTIFACT. This year is no different. Jen Simmons will share her thinking process for designing a CSS grid-based layout by live-coding a site before our very eyes. Design systems, which have become a cornerstone of large-scale site production, get the treatment in talks by Kim Williams, Dan Mall, and Brad Frost. (Dan and Brad are also running their acclaimed “Designer + Developer Collaboration Workflow” workshop on October 3.) Divya Sasidharan will show off the possibilities and performance advantages of static sites in her “JAMstackin” presentation, and we’ll get a glimpse of the future of web animation from Sarah Drasner. (She’s bringing her popular “Design for Developers” workshop on October 3 as well).

The web can always do better to serve the people who use it. We’re proud to provide an occasion for designers and developers who care about putting their users front and center to mingle and share ideas. And yes, there will be milkshakes! The very best milkshakes.


ARTIFACT takes place in Austin, TX from September 30 to October 2, 2019 with workshops on October 3. Group discounts are available.

The post What the Web Needs Now (and how ARTIFACT is here for it) appeared first on CSS-Tricks.

Weekly Platform News: CSS ::marker pseudo-element, pre-rendering web components, adding Webmention to your site

Šime posts regular content for web developers on webplatform.news.

In this week’s roundup: datepickers are giving keyboard users headaches, a new web component compiler that helps fight FOUC, we finally get our hands on styling list item markers, and four steps to getting webmentions on your site.

Using plain text fields for date input

Keyboard users prefer regular text fields over complex date pickers, and voice users are frustrated by the native control (<input type="date">).

Previously, I have relied on plain text inputs as date fields with custom validation for the site, typically using the same logic on the client and the server. For known dates — birthdays, holidays, anniversaries, etc. — it has tested well.

(via Adrian Roselli)
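
As a rough, hypothetical sketch (not from the linked post), shared client/server validation for a known date format could be as simple as this:

// Accepts "YYYY-MM-DD" and checks that it's a real calendar date,
// so something like 2019-02-30 is rejected.
function isValidDate(value) {
  const match = /^(\d{4})-(\d{2})-(\d{2})$/.exec(value.trim());
  if (!match) return false;
  const [, year, month, day] = match.map(Number);
  const date = new Date(year, month - 1, day);
  return date.getFullYear() === year &&
         date.getMonth() === month - 1 &&
         date.getDate() === day;
}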

Pre-rendering web components

Stencil is a “web component compiler” that can be used to pre-render web components (including Shadow DOM) or hide them until they are fully styled to avoid the flash of unstyled content (FOUC).

This tool also makes sure that polyfills are only loaded when needed, and its Component API includes useful decorators and hooks that make writing web components easier (e.g., the Prop decorator handles changes to attributes).

import { Component, Prop, h } from "@stencil/core";

@Component({
  tag: "my-component"
})
export class MyComponent {
  @Prop() age: number = 0;

  render() {
    return <div>I am {this.age} years old</div>;
  }
}

(via Max Lynch)

The CSS ::marker pseudo-element

When the CSS display: list-item declaration is applied to an element, the element generates a marker box containing a marker, e.g., a list bullet (the <li> and <summary> elements have markers by default).

Markers can be styled via the ::marker pseudo-element (useful for changing the color or font of the marker), but this CSS feature is currently only supported in Firefox.
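
As a quick sketch of what that looks like (Firefox-only at the time of writing), something like this recolors and resizes the bullets without any extra markup:

/* Style the list bullets directly via the marker box */
li::marker {
  color: rebeccapurple;
  font-size: 1.2em;
}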

(via Rachel Andrew)

Adding Webmention to your website

  1. Sign up on Webmention.io; this is a service that collects webmentions on your behalf.
  2. Add <link rel="webmention"> (with the appropriate href value) to your web pages.

    There are also Webmention plugins available for all major content management systems (CMS) if you prefer building on top of your CMS.

  3. Fetch webmentions from Webmention.io (via Ajax) to display them on your page (see the sketch just below).
  4. Use webmention.app to automate sending webmentions (when you publish content that includes links to other sites that support Webmention).

(via Daniel Aleksandersen)
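
For step 3 above, a minimal sketch of that Ajax call might look like this (it assumes the mentions.jf2 endpoint; the target URL is a placeholder for your own page):

// Fetch the webmentions Webmention.io has collected for one page
const target = encodeURIComponent('https://example.com/my-post/');

fetch('https://webmention.io/api/mentions.jf2?target=' + target)
  .then(response => response.json())
  .then(feed => {
    // feed.children holds the individual webmentions (likes, replies, reposts)
    console.log(feed.children);
  })
  .catch(error => console.error('Could not load webmentions', error));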

The post Weekly Platform News: CSS ::marker pseudo-element, pre-rendering web components, adding Webmention to your site appeared first on CSS-Tricks.

Weekly Platform News: HTML Inspection in Search Console, Global Scope of Scripts, Babel env Adds defaults Query

In this week’s look around the world of web platform news, Google Search Console makes it easier to view crawled markup, we learn that custom properties aren’t computing hogs, variables defined at the top-level in JavaScript are global to other page scripts, and Babel env now supports the defaults query — plus all of last month’s news compiled into a single package for you.

Easier HTML inspection in Google Search Console

The URL Inspection tool in Google Search Console now includes useful controls for searching within and copying the HTML code of the crawled page.

Note: The URL Inspection tool provides information about Google’s indexed version of a specific page. You can access Google Search Console at https://search.google.com/search-console.

(via Barry Schwartz)

CSS properties are computed once per element

The value of a CSS custom property is computed once per element. If you define a custom property --func on the <html> element that uses the value of another custom property --val, then re-defining the value of --val on a nested DOM element that uses --func won’t have any effect because the inherited value of --func is already computed.

html {
  --angle: 90deg;
  --gradient: linear-gradient(var(--angle), blue, red);
}

header {
  --angle: 270deg; /* ignored */
  background-image: var(--gradient); /* inherited value */
}

(via Miriam Suzanne)

The global scope of scripts

JavaScript variables created via let, const, or class declarations at the top level of a script (<script> element) continue to be defined in subsequent scripts included in the page.
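
In other words, a small sketch with two classic (non-module) scripts:

// first.js, loaded via a regular <script src="first.js"></script>
let answer = 42;

// second.js, a later classic <script> on the same page; the top-level
// binding from first.js is visible here, even though it is not
// attached to window (window.answer is undefined)
console.log(answer); // 42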

Note: Axel Rauschmayer calls this “the global scope of scripts.”

(via Surma)

Babel env now supports the defaults query

Babel’s env preset (@babel/preset-env) now allows you to target browserslist’s default browsers (which are listed at browsersl.ist). Note that if you don’t specify your target browsers, Babel env will run every syntax transform on your code.

{
  "presets": [
    [
      "@babel/preset-env",
      {
        "targets": { "browsers": "defaults" }
      }
    ]
  ]
}

(via Nicolò Ribaudo)

All the June 2019 news that’s fit to… print

For your convenience, I have compiled all 59 news items that I’ve published throughout June into one 10-page PDF document.

Download PDF

The post Weekly Platform News: HTML Inspection in Search Console, Global Scope of Scripts, Babel env Adds defaults Query appeared first on CSS-Tricks.

Protecting Vue Routes with Navigation Guards

Authentication is a necessary part of every web application. It is a handy means by which we can personalize experiences and load content specific to a user — like a logged in state. It can also be used to evaluate permissions, and prevent otherwise private information from being accessed by unauthorized users.

A common practice that applications use to protect content is to house it under specific routes and build redirect rules that navigate users toward or away from a resource depending on their permissions. To gate content reliably behind protected routes, those routes need to be built as separate static pages. This way, redirect rules can properly handle redirects.

In the case of Single Page Applications (SPAs) built with modern front-end frameworks, like Vue, redirect rules cannot be utilized to protect routes. Because all pages are served from a single entry file, from a browser’s perspective, there is only one page: index.html. In a SPA, route logic generally stems from a routes file. This is where we will do most of our auth configuration for this post. We will specifically lean on Vue’s navigation guards to handle authentication specific routing since this helps us access selected routes before it fully resolves. Let’s dig in to see how this works.

Roots and Routes

Navigation guards are a specific feature within Vue Router that provide additional functionality pertaining to how routes get resolved. They are primarily used to handle error states and navigate a user seamlessly without abruptly interrupting their workflow.

There are three main categories of guards in Vue Router: Global Guards, Per Route Guards and In Component Guards. As the names suggest, Global Guards are called when any navigation is triggered (i.e. when URLs change), Per Route Guards are called when the associated route is called (i.e. when a URL matches a specific route), and Component Guards are called when a component in a route is created, updated or destroyed. Within each category, there are additional methods that give you more fine-grained control of application routes. Here’s a quick breakdown of all available methods within each type of navigation guard in Vue Router.

Global Guards

  • beforeEach: action before entering any route (no access to this scope)
  • beforeResolve: action before the navigation is confirmed, but after in-component guards (same as beforeEach with this scope access)
  • afterEach: action after the route resolves (cannot affect navigation)

Per Route Guards

  • beforeEnter: action before entering a specific route (unlike global guards, this has access to this)

Component Guards

  • beforeRouteEnter: action before navigation is confirmed, and before component creation (no access to this)
  • beforeRouteUpdate: action after a new route has been called that uses the same component
  • beforeRouteLeave: action before leaving a route

Protecting Routes

To implement them effectively, it helps to know when to use them in any given scenario. If you wanted to track page views for analytics for instance, you may want to use the global afterEach guard, since it gets fired when the route and associated components are fully resolved. And if you wanted to prefetch data to load onto a Vuex store before a route resolves, you could do so using the beforeEnter per route guard.
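
For instance, a global afterEach guard for page view tracking could be as small as this (trackPageView is a hypothetical analytics helper):

// Runs after every navigation has resolved; it cannot affect the
// navigation, which is exactly what we want for analytics.
router.afterEach((to, from) => {
  trackPageView(to.fullPath);
});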

Since our example deals with protecting specific routes based on a user’s access permissions, we will use per route navigation guards, namely the beforeEnter hook. This navigation guard gives us access to the proper route before the resolve completes, meaning that we can fetch data or check that data has loaded before letting a user pass through. Before diving into the implementation details of how this works, let’s briefly look at how our beforeEnter hook fits into our existing routes file. Below, we have our sample routes file, which has our protected route, aptly named protected. We will add our beforeEnter hook to it like so:

const router = new VueRouter({
  routes: [
    ...
    {
      path: "/protected",
      name: "protected",
      // lazy-load the route's component
      component: () => import(/* webpackChunkName: "protected" */ './Protected.vue'),
      beforeEnter(to, from, next) {
        // logic here
      }
    }
  ]
})

Anatomy of a route

The anatomy of a beforeEnter is not much different from other available navigation guards in Vue Router. It accepts three parameters: to, the “future” route the app is navigating to; from, the “current/soon past” route the app is navigating away from; and next, a function that must be called for the route to resolve successfully.

Generally, when using Vue Router, next is called without any arguments. However, this assumes a perpetual success state. In our case, we want to ensure that unauthorized users who fail to enter a protected resource have an alternate path to take that redirects them appropriately. To do this, we will pass in an argument to next. For this, we will use the name of the route to navigate users to if they are unauthorized like so:

next({
  name: "dashboard"
})

Let’s assume in our case, that we have a Vuex store where we store a user’s authorization token. In order to check that a user has permission, we will check this store and either fail or pass the route appropriately.

beforeEnter(to, from, next) {
  // check vuex store //
  if (store.getters["auth/hasPermission"]) {
    next()
  } else {
    next({
      name: "dashboard" // back to safety route //
    });
  }
}

In order to ensure that events happen in sync and that the route doesn’t prematurely load before the Vuex action is completed, let’s convert our navigation guards to use async/await.

async beforeEnter(to, from, next) {
  try {
    var hasPermission = await store.dispatch("auth/hasPermission");
    if (hasPermission) {
      next()
    } else {
      // the check resolved to false, so still send the user back //
      next({
        name: "dashboard" // back to safety route //
      })
    }
  } catch (e) {
    next({
      name: "dashboard" // back to safety route //
    })
  }
}

Never forget where you came from

So far our navigation guard fulfills its purpose of preventing unauthorized users from accessing protected resources by redirecting them to where they may have come from (i.e. the dashboard page). Even so, such a workflow is disruptive. Since the redirect is unexpected, a user may assume user error and attempt to access the route repeatedly with the eventual assumption that the application is broken. To account for this, let’s create a way to let users know when and why they are being redirected.

We can do this by passing in a query parameter to the next function. This allows us to append the protected resource path to the redirect URL. So, if you want to prompt a user to log into an application or obtain the proper permissions without having to remember where they left off, you can do so. We can get access to the path of the protected resource via the to route object that is passed into the beforeEnter function like so: to.fullPath.

async beforeEnter(to, from, next) {
  try {
    var hasPermission = await store.dispatch("auth/hasPermission");
    if (hasPermission) {
      next()
    } else {
      next({
        name: "login", // back to safety route //
        query: { redirectFrom: to.fullPath }
      })
    }
  } catch (e) {
    next({
      name: "login", // back to safety route //
      query: { redirectFrom: to.fullPath }
    })
  }
}

Notifying

The next step in enhancing the workflow of a user failing to access a protected route is to send them a message letting them know of the error and how they can solve the issue (either by logging in or obtaining the proper permissions). For this, we can make use of in component guards, specifically, beforeRouteEnter, to check whether or not a redirect has happened. Because we passed in the redirect path as a query parameter in our routes file, we now can check the route object to see if a redirect happened.

beforeRouteEnter(to, from, next) {
  if (to.query.redirectFrom) {
    // do something //
  }
}

As I mentioned earlier, all navigation guards must call next in order for a route to resolve. The upside to the next function as we saw earlier is that we can pass an object to it. What you may not have known is that you can also access the Vue instance within the next function. Wuuuuuuut? Here’s what that looks like:

next(() => {
  console.log(this) // this is the Vue instance
})

You may have noticed that you don’t technically have access to the this scope when using beforeRouteEnter. Though this might be the case, you can still access the Vue instance by passing in the vm to the function like so:

next(vm => {
  console.log(vm) // this is the Vue instance
})

This is especially handy because you can now create and appropriately update a data property with the relevant error message when a route redirect happens. Say you have a data property called errorMsg. You can now update this property from the next function within your navigation guards easily and without any added configuration. Using this, you would end up with a component like this:

<template>
  <div>
    <span>{{ errorMsg }}</span>
    <!-- some other fun content -->
    ...
    <!-- some other fun content -->
  </div>
</template>
<script>
export default {
  name: "Error",
  data() {
    return {
      errorMsg: null
    }
  },
  beforeRouteEnter(to, from, next) {
    if (to.query.redirectFrom) {
      next(vm => {
        vm.errorMsg =
          "Sorry, you don't have the right access to reach the route requested"
      })
    } else {
      next()
    }
  }
}
</script>

Conclusion

The process of integrating authentication into an application can be a tricky one. We covered how to gate a route from unauthorized access as well as how to put workflows in place that redirect users toward and away from a protected resource based on their permissions. The assumption thus far has been that you already have authentication configured in your application. If you don’t yet have this configured and you’d like to get up and running fast, I highly recommend working with authentication as a service. There are providers like Netlify’s Identity Widget or Auth0’s lock.

The post Protecting Vue Routes with Navigation Guards appeared first on CSS-Tricks.

Position Sticky and Table Headers

You can’t position: sticky; a <thead>. Nor a <tr>. But you can sticky a <th>, which means you can make sticky headers inside a regular ol’ <table>. This is tricky stuff, because if you didn’t know this weird quirk, it would be hard to blame you. It makes way more sense to sticky a parent element like the table header rather than each individual element in a row.

The issue boils down to the fact that stickiness requires position: relative to work and that doesn’t apply to <thead> and <tr> in the CSS 2.1 spec.

There are two very extreme reactions to this, should you need to implement sticky table headers and not be aware of the <th> workaround.

  • Don’t use table markup at all. Instead, use different elements (<div>s and whatnot) and other CSS layout methods to replicate the style of a table, but not be locked out of using position: relative and creating position: sticky parent elements.
  • Use table elements, but totally remove all their styling defaults with new display values.

The first is dangerous because you aren’t using semantic and accessible elements for the content to be read and navigated. The second is almost the same. You can go that route, but need to be really careful to re-apply semantic roles.

Anyway, none of that matters if you just stick (get it?!) to using a sticky value on those <th> elements.
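
In its simplest form, that’s something like this (top: 0 pins the header cells to the top of their nearest scrolling ancestor; the background is just so rows don’t show through while scrolling):

th {
  position: sticky;
  top: 0;
  background: white;
}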

See the Pen
Sticky Table Headers with CSS
by Chris Coyier (@chriscoyier)
on CodePen.

It’s probably a bit weird to have table headers as a row in the middle of a table, but it’s just illustrating the idea. I was imagining colored header bars separating players on different sports teams or something.

Anytime I think about data tables, I also think about how tricky it can be to make them responsive. Fortunately, there are a variety of ways, all depending on the best way to group and explore the data in them.

The post Position Sticky and Table Headers appeared first on CSS-Tricks.

Color Inputs: A Deep Dive into Cross-Browser Differences

In this article, we’ll be taking a look at the structure inside <input type='color'> elements, browser inconsistencies, why they look a certain way in a certain browser, and how to dig into it. Having a good understanding of this input allows us to evaluate whether a certain cross-browser look can be achieved and how to do so with a minimum amount of effort and code.

Here’s exactly what we’re talking about:

But before we dive into this, we need to get into…

Accessibility issues!

We’ve got a huge problem here: for those who completely rely on a keyboard, this input doesn’t work as it should in Safari and in Firefox on Windows, but it does work in Firefox on Mac and Linux (which I only tested on Fedora, so feel free to yell at me in the comments if it doesn’t work for you using another distribution).

In Firefox on Windows, we can Tab to the input to focus it, press Enter to bring up a dialog… which we then cannot navigate with the keyboard!

I’ve tried tabbing, arrow keys, and every other key available on the keyboard… nothing! I could at least close the dialog with good old Alt + F4. Later, in the bug ticket I found for this on Bugzilla, I also discovered a workaround: Alt + Tab to another window, then Alt + Tab back and the picker dialog can be navigated with the keyboard.

Things are even worse in Safari. The input isn’t even focusable (bug ticket) if VoiceOver isn’t on. And even when using VoiceOver, tabbing through the dialog the input opens is impossible.

If you’d like to use <input type='color'> on an actual website, please let browsers know this is something that needs to be solved!

How to look inside

In Chrome, we need to bring up DevTools, go to Settings and, in the Preferences section under Elements, check the Show user agent shadow DOM option.

How to view the structure inside an input in Chrome.

Then, when we return to inspect our element, we can see inside its shadow DOM.

In Firefox, we need to go to about:config and ensure the devtools.inspector.showAllAnonymousContent flag is set to true.

How to view the structure inside an input in Firefox.

Then, we close the DevTools and, when we inspect our input again, we can see inside our input.

Sadly, we don’t seem to have an option for this in pre-Chromium Edge.

The structure inside

The structure revealed in DevTools differs from browser to browser, just like it does for range inputs.

In Chrome, at the top of the shadow DOM, we have a <div> wrapper that we can access using ::-webkit-color-swatch-wrapper.

Inside it, we have another <div> we can access with ::-webkit-color-swatch.

Screenshot of Chrome DevTools showing the shadow DOM of the <input type='color'>. Right at the top, we have a div which is the swatch wrapper and can be accessed using ::-webkit-color-swatch-wrapper. Inside it, there's another div which is the swatch and can be accessed using ::-webkit-color-swatch. This div has the background-color set to the value of the parent color input.
Inner structure in Chrome.

In Firefox, we only see one <div>, but it’s not labeled in any way, so how do we access it?

On a hunch, given this <div> has the background-color set to the input’s value attribute, just like the ::-webkit-color-swatch component, I tried ::-moz-color-swatch. And it turns out it works!

Screenshot of Firefox DevTools showing what's inside an <input type='color'>. Unlike in Chrome, here we only have a div which is the swatch and can be accessed using ::-moz-color-swatch. This div has the background-color set to the value of the parent color input.
Inner structure in Firefox.

However, I later learned we have a better way of figuring this out for Firefox!

We can go into the Firefox DevTools Settings and, in the Inspector section, make sure the “Show Browser Styles” option is checked. Then, we go back to the Inspector and select this <div> inside our <input type='color'>. Among the user agent styles, we see a rule set for input[type='color']::-moz-color-swatch!

Animated gif. Shows how to enable viewing applied user agent styles in Firefox DevTools: Settings > Inspector > check the Show Browser styles checkbox.
Enable viewing browser styles in Firefox DevTools.

In pre-Chromium Edge, we cannot even see what kind of structure we have inside. I gave ::-ms-color-swatch a try, but it didn’t work and neither did ::-ms-swatch (which I considered because, for an input type='range', we have ::-webkit-slider-thumb and ::-moz-range-thumb, but just ::-ms-thumb).

After a lot of searching, all I found was this issue from 2016. Pre-Chromium Edge apparently doesn’t allow us to style whatever is inside this input. Well, that’s a bummer.
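
Putting the Chrome and Firefox parts together, a minimal styling sketch looks like this. The rules are kept separate on purpose: a selector list containing a vendor pseudo-element the browser doesn't recognize risks getting the whole rule dropped.

/* Chrome: the wrapper around the swatch */
input[type='color']::-webkit-color-swatch-wrapper {
  padding: 0;
}

/* Chrome: the swatch itself */
input[type='color']::-webkit-color-swatch {
  border: none;
  border-radius: 50%;
}

/* Firefox: the swatch, in its own rule */
input[type='color']::-moz-color-swatch {
  border: none;
  border-radius: 50%;
}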

How to look at the browser styles

In all browsers, we have the option of not applying any styles of our own and then looking at the computed styles.

In Chrome and Firefox, we can also see the user agent stylesheet rule sets that are affecting the currently selected element (though we need to explicitly enable this in Firefox, as seen in the previous section).

Screenshot collage of Chrome DevTools and Firefox DevTools showing where to look for user agent styles: Elements > Styles in Chrome and Inspector > Styles in Firefox.
Checking browser styles in Chrome and Firefox.

This is oftentimes more helpful than the computed styles, but there are exceptions and we should still always check the computed values as well.

In Firefox, we can also see the CSS file for the form elements at view-source:resource://gre-resources/forms.css.

Screenshot showing view-source:resource://gre-resources/forms.css open in Firefox to allow us seeing user agent styles for form elements.
Checking browser styles in Firefox.

The input element itself

We’ll now be taking a look at the default values of a few properties in various browsers in order to get a clear picture of what we’d really need to set explicitly in order to get a custom cross-browser result.

The first property I always think about checking when it comes to <input> elements is box-sizing. The initial value of this property is border-box in Firefox, but content-box in Chrome and Edge.

Comparative screenshots of DevTools in the three browsers showing the computed values of box-sizing for the actual input.
The box-sizing values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

We can see that Firefox is setting it to border-box on <input type='color'>, but it looks like Chrome isn’t setting it at all, so it’s left with the initial value of content-box (and I suspect the same is true for Edge).

In any event, what it all means is that, if we are to have a border or a padding on this element, we also need to explicitly set box-sizing so that we get a consistent result across all these browsers.
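
A sketch of that normalization (the exact dimensions, padding and border here are arbitrary):

input[type='color'] {
  box-sizing: border-box; /* same box model in all three browsers */
  width: 3em;
  height: 2em;
  padding: 2px;
  border: 1px solid #767676;
}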

The font property value is different for every browser, but since we don’t have text inside this input, all we really care about is the font-size, which is consistent across all browsers I’ve checked: 13.33(33)px. This is a value that really looks like it came from dividing 40px by 3, at least in Chrome.

Comparative screenshots of DevTools in the three browsers showing the font values for the actual input.
The font values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

This is a situation where the computed styles are more useful for Firefox, because if we look at the browser styles, we don’t get much in terms of useful information:

Screenshot of what we get if we look at the browser styles where the font was set for Firefox. The value for the font is -moz-field, which is an alias for the look of a native text field. Expanding this to check the longhands shows us empty instead of actual values.
Sometimes the browser styles are pretty much useless (Firefox screenshot).

The margin is also consistent across all these browsers, computing to 0.

Comparative screenshots of DevTools in the three browsers showing the margin values for the actual input.
The margin values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

The border is different for every single browser. In both Chrome and Edge, we have a solid 1px one, but the border-color is different (rgb(169, 169, 169) for Chrome and rgb(112, 112, 112) for Edge). In Firefox, the border is an outset 2px one, with a border-color of… ThreeDLightShadow?!

Comparative screenshots of DevTools in the three browsers showing the border values for the actual input.
The border values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

What’s the deal with ThreeDLightShadow? If it doesn’t sound familiar, don’t worry! It’s a (now deprecated) CSS2 system value, which Firefox on Windows shows me to be rgb(227, 227, 227) in the Computed styles tab.

Screenshot of Computed panel search in Firefox on Windows, showing that the ThreeDLightShadow keyword computes to rgb(227, 227, 227).
Computed border-color for <input type='color'> in Firefox on Windows.

Note that in Firefox (at least on Windows), the operating system zoom level (Settings → System → Display → Scale and layout → Change the size of text, apps and other items) is going to influence the computed value of the border-width, even though this doesn’t seem to happen for any other property I’ve checked and it seems to be partially related to the border-style.

Screenshot showing the Windows display settings window with the zoom level options dropdown opened.
Zoom level options on Windows.

The strangest thing is the computed border-width values for various zoom levels don’t seem to make any sense. If we keep the initial border-style: outset, we have:

  • 1.6px for 125%
  • 2px for 150%
  • 1.7px for 175%
  • 1.5px for 200%
  • 1.8px for 225%
  • 1.6px for 250%
  • 1.66667px for 300%

If we set border-style: solid, we have a computed border-width of 2px, exactly as it was set, for zoom values that are multiples of 50% and the exact same computed values as for border-style: outset for all the other zoom levels.

The padding is the same for Chrome and Edge (1px 2px), while Firefox is the odd one out again.

Comparative screenshots of DevTools in the three browsers showing the padding values for the actual input.
The padding values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

It may look like the Firefox padding is 1px. That’s what it is set to and there’s no indication of anything overriding it — if a property is overridden, then it’s shown as grey and with a strike-through.

Screenshot of Firefox DevTools highlighting how the border set on input[type='color'] overrides the one set on input and the look (grey + strike-through) of overridden properties.
Spotting overrides in Firefox.

But the computed value is actually 0 8px! Moreover, this is a value that doesn’t depend on the operating system zoom level. So, what the hairy heck is going on?!

Screenshot of Firefox DevTools showing how the computed padding value on <input type='color'> isn't the one that was set on input, even if no override seems to be happening.
Computed value for padding in Firefox doesn’t match the value that was set on input.

Now, if you’ve actually tried inspecting a color input, took a close look at the styles set on it, and your brain works differently than mine (meaning you do read what’s in front of you and don’t just scan for the one thing that interests you, completely ignoring everything else…) then you’ve probably noticed there is something overriding the 1px padding (and should be marked as such) — the flow-relative padding!

Screenshot of Firefox DevTools showing the flow-relative padding overriding the old padding due to higher specificity of selector (input[type='color'] vs. input).
Flow-relative padding overrides in Firefox.

Dang, who knew those properties with lots of letters were actually relevant? Thanks to Zoltan for noticing and letting me know. Otherwise, it probably would have taken me two more days to figure this one out.

This raises the question of whether the same kind of override couldn’t happen in other browsers and/or for other properties.

Edge doesn’t support CSS logical properties, so the answer is a “no” in that corner.

In Chrome, none of the logical properties for margin, border or padding are set explicitly for <input type='color'>, so we have no override.

Concerning other properties in Firefox, we could have found ourselves in the same situation for margin or for border, but with these two, it just so happens the flow-relative properties haven’t been explicitly set for our input, so again, there’s no override.

Even so, it’s definitely something to watch out for in the future!

Moving on to dimensions, our input’s width is 44px in Chrome and Edge and 64px in Firefox.

Comparative screenshots of DevTools in the three browsers showing the width values for the actual input.
The width values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

Its height is 23px in all three browsers.

Comparative screenshots of DevTools in the three browsers showing the height values for the actual input.
The height values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

Note that, since Chrome and Edge have a box-sizing of content-box, their width and height values do not include the padding or border. However, since Firefox has box-sizing set to border-box, its dimensions include the padding and border.

Comparative screenshots of DevTools in the three browsers showing the layout boxes.
The layout boxes for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

This means the content-box is 44px x 23px in Chrome and Edge and 44px x 19px in Firefox, the padding-box is 48px x 25px in Chrome and Edge and 60px x 19px in Firefox, and the border-box is 50px x 27px in Chrome and Edge and 64px x 23px in Firefox.

We can clearly see how the dimensions were set in Chrome and I’d assume they were set in the same direct way in Edge as well, even if Edge doesn’t allow us to trace this stuff. Firefox doesn’t show these dimensions as having been explicitly set and doesn’t even allow us to trace where they came from in the Computed tab (as it does for other properties like border, for example). But if we look at all the styles that have been set on input[type='color'], we discover the dimensions have been set as flow-relative ones (inline-size and block-size).

Screenshot of the Firefox user agent styles showing flow relative dimensions being set on input[type='color'].
How <input type='color'> dimensions have been set in Firefox.

The final property we check for the normal state of the actual input is background. Here, Edge is the only browser to have a background-image (set to a top to bottom gradient), while Chrome and Firefox both have a background-color set to ButtonFace (another deprecated CSS2 system value). The strange thing is this should be rgb(240, 240, 240) (according to this resource), but its computed value in Chrome is rgb(221, 221, 221).

Comparative screenshots of DevTools in the three browsers showing the background values for the actual input.
The background values for <input type='color'> compared in Chrome, Firefox and Edge (from top-to-bottom).

What’s even stranger is that, if we actually look at our input in Chrome, it sure does look like it has a gradient background! If we screenshot it and then use a picker, we get that it has a top to bottom gradient from #f8f8f8 to #ddd.

Screenshot of the input in Chrome. A very light, almost white, grey to another light, but still darker grey gradient from top to bottom can be seen as the background, not the solid background-color indicated by DevTools.
What the actual input looks like in Chrome. It appears to have a gradient, in spite of the info we get from DevTools telling us it doesn’t.

Also, note that changing just the background-color (or another property not related to dimensions like border-radius) in Edge also changes the background-image, background-origin, border-color or border-style.

Animated gif. Shows the background-image, background-origin, border-color, border-style before and after changing the seemingly unrelated background-color - their values don't get preserved following this change.
Edge: side-effects of changing background-color.

Other states

We can take a look at the styles applied for a bunch of other states of an element by clicking the :hov button in the Styles panel for Chrome and Firefox and the a: button in the same Styles panel for Edge. This reveals a section where we can check the desired state(s).

Screenshot collage highlighting the buttons that bring up the states panel in Chrome, Firefox and Edge.
Taking a look at other states in Chrome, Firefox, Edge (from top to bottom).

Note that, in Firefox, checking a class only visually applies the user styles on the selected element, not the browser styles. So, if we check :hover for example, we won’t see the :hover styles applied on our element. We can however see the user agent styles matching the selected state for our selected element shown in DevTools.

Also, we cannot test for all states like this and let’s start with such a state.

:disabled

In order to see how styles change in this state, we need to manually add the disabled attribute to our <input type='color'> element.

Hmm… not much changes in any browser!

In Chrome, we see the background-color is slightly different (rgb(235, 235, 228) in the :disabled state versus rgb(221, 221, 221) in the normal state).

Chrome DevTools screenshot showing the background being set to rgb(235, 235, 228) for a :disabled input.
Chrome :disabled styling.

But the difference is only clear looking at the info in DevTools. Visually, I can tell there’s a slight difference between an input that’s :disabled and one that’s not if they’re side-by-side, but if I didn’t know beforehand, I couldn’t tell which is which just by looking at them, and if I just saw one, I couldn’t tell whether it’s enabled or not without clicking it.

Disabled and enabled input side by side in Chrome. There is a slight difference in background-color, but it's pretty much impossible to tell which is which just by looking at them.
Disabled (left) versus enabled (right) <input type='color'> in Chrome.

In Firefox, we have the exact same values set for the :disabled state as for the normal state (well, except for the cursor, which, realistically, isn’t going to produce different results save for exceptional cases anyway). What gives, Firefox?!

Comparison of styles set in Firefox for <input type='color'> in its normal state and its :disabled state. The padding and border set in the :disabled case are exactly the same as those set in the normal case.
Firefox :disabled (top) versus normal (bottom) styling.

In Edge, both the border-color and the background gradient are different.

Edge DevTools screenshot showing border-color and the background-image being set to slightly different values for a :disabled input.
Edge :disabled styling (by checking computed styles).

We have the following styles for the normal state:

border-color: rgb(112, 112, 112);
background-image: linear-gradient(rgb(236, 236, 236), rgb(213, 213, 213));

And for the :disabled state:

border-color: rgb(186, 186, 186);
background-image: linear-gradient(rgb(237, 237, 237), rgb(229, 229, 229));

The values are clearly different if we look at the code, and the visual difference is better than Chrome’s, though it still may not be quite enough:

Disabled and enabled input side by side in Edge. There is a slight difference in background-image and a bigger difference in border-color, but it still may be difficult to tell whether an input is enabled or not at first sight without having a reference to compare.
Disabled (left) versus enabled (right) <input type='color'> in Edge.

:focus

This is one state we can test by toggling the DevTools pseudo-classes. Well, in theory. In practice, it doesn’t really help us in all browsers.

Starting with Chrome, we can see that we have an outline in this state and the outline-color computes to rgb(77, 144, 254), which is some kind of blue.

Chrome DevTools screenshot showing an outline for an input having <code>:focus</code>.
Chrome :focus styling.

Pretty straightforward and easy to spot.

Moving on to Firefox, things start to get hairy! Unlike Chrome, toggling the :focus pseudo-class from DevTools does nothing on the input element, though by focusing it (by tabbing to it), the border becomes blue and we get a dotted rectangle within — but there’s no indication in DevTools regarding what is happening.

Animated gif. Shows how, on :focus, our input gets a blue border and a dark inner dotted rectangle.
What happens in Firefox when tabbing to our input to :focus it.

If we check Firefox’s forms.css, it provides an explanation for the dotted rectangle. This is the dotted border of a pseudo-element, ::-moz-focus-inner (a pseudo-element which, for some reason, isn’t shown in DevTools inside our input as ::-moz-color-swatch is). This border is initially transparent and then becomes visible when the input is focused — the pseudo-class used here (:-moz-focusring) is pretty much an old Firefox version of the new standard (:focus-visible), which is currently only supported by Chrome behind the Experimental Web Platform features flag.

Firefox DevTools screenshot where the inner dotted rectangle on :focus comes from: it is set as a transparent border on the ::-moz-focus-inner pseudo-element and it becomes visible when the input should have a noticeable :focus indicator.
Firefox: where the inner dotted rectangle on :focus comes from.

What about the blue border? Well, it appears this one isn’t set by a stylesheet, but at an OS level instead. The good news is we can override all these styles should we choose to do so.
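For instance, a rough sketch of such an override could look like this, on the condition that we swap in an obvious focus indicator of our own (the outline color below simply echoes the blue Chrome uses):

/* remove the dotted inner rectangle Firefox draws on :focus */
input[type='color']::-moz-focus-inner {
  border: none;
}

/* replace the removed default with a clearly visible indicator */
input[type='color']:focus {
  outline: 2px solid #4d90fe;
  outline-offset: 2px;
}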

In Edge, we’re faced with a similar situation. Nothing happens when toggling the :focus pseudo-class from DevTools, but if we actually tab to our input to focus it, we can see an inner dotted rectangle.

Animated gif. Shows how, on :focus, our input gets an inner dotted rectangle.
What happens in Edge when tabbing to our input to :focus it.

Even though I have no way of knowing for sure, I suspect that, just like in Firefox, this inner rectangle is due to a pseudo-element that becomes visible on :focus.

:hover

In Chrome, toggling this pseudo-class doesn’t reveal any :hover-specific styles in DevTools. Furthermore, actually hovering the input doesn’t appear to change anything visually. So it looks like Chrome really doesn’t have any :hover-specific styles?

In Firefox, toggling the :hover pseudo-class from DevTools reveals a new rule in the styles panel:

Screenshot of Firefox DevTools showing the rule set that shows up for the :hover state.
Firefox :hover styling as seen in DevTools.

When actually hovering the input, we see the background turns light blue and the border blue, so the first thought would be that light blue is the -moz-buttonhoverface value and that the blue border is again set at an OS level, just like in the :focus case.

Animated gif. Shows that, on actually hovering our , it gets a light blue background and a blue border.
What actually happens in Firefox on :hover.

However, if we look at the computed styles, we see the same background we have in the normal state, so that light blue background is probably set at an OS level as well, in spite of having that rule in the forms.css stylesheet.

Screenshot of Firefox DevTools showing the computed value for background-color in the :hover state.
Firefox: computed background-color of an <input type='color'> on :hover.

In Edge, toggling the :hover pseudo-class from DevTools gives our input a light blue (rgb(166, 244, 255)) background and a blue (rgb(38, 160, 218)) border, whose exact values we can find in the Computed tab:

Screenshot of Edge DevTools showing the computed value for background-color and border-color in the :hover state.
Edge: computed background-color and border-color of an <input type='color'> on :hover.

:active

Checking the :active state in the Chrome DevTools does nothing visually and shows no specific rules in the Styles panel. However, if we actually click our input, we see that the background gradient that doesn’t even show up in DevTools in the normal state gets reversed.

Screenshot of the input in :active state in Chrome. A very light, almost white, grey to another light, but still darker grey gradient from bottom to top can be seen as the background, not the solid background-color indicated by DevTools.
What the actual input looks like in Chrome in the :active state. It appears to have a gradient (reversed from the normal state), in spite of the info we get from DevTools telling us it doesn’t.

In Firefox DevTools, toggling the :active state on does nothing, but if we also toggle the :hover state on, then we get a rule set that changes the inline padding (the block padding is set to the same value of 0 it has in all other states), the border-style and sets the background-color back to our old friend ButtonFace.

Screenshot of Firefox DevTools showing the rule set that shows up for the :active state.
Firefox :active styling as seen in DevTools.

In practice, however, the only thing that matches the info we get from DevTools is the inline shift given by the change in logical padding. The background becomes a lighter blue than the :hover state and the border is blue. Both of these changes are probably happening at an OS level as well.

Animated gif. Shows that, on actually clicking our , it gets a light blue background and a blue border in addition to sliding 1 pixel in the inline direction as a result of changing the inline padding.
What actually happens in Firefox in an :active state.

In Edge, toggling the :active pseudo-class from DevTools gives us the exact same styles we have for the :hover state. However, if we have both the :hover and the :active states on, things change a bit. We still have a light blue background and a blue border, but both are darker now (rgb(52, 180, 227) for the background-color and rgb(0, 137, 180) for the border-color):

Screenshot of Edge DevTools showing the computed value for background-color and border-color in the :active state.
The computed background-color and border-color of an <input type='color'> on :active viewed in Edge.

This is the takeaway: if we want consistent cross-browser results for <input type='color'>, we should define our own clearly distinguishable styles for all these states ourselves because, fortunately, almost all the browser defaults — except for the inner rectangle we get in Edge on :focus — can be overridden.
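For example, a minimal sketch of explicitly styling each state might look something like this (all the values are purely illustrative, not recommendations):

input[type='color'] {
  width: 50px;
  height: 28px;
  padding: 2px;
  border: 2px solid #777;
  background: #eee;
  cursor: pointer;
}

input[type='color']:hover { border-color: #444; }

input[type='color']:focus {
  outline: 2px solid #4d90fe;
  outline-offset: 2px;
}

input[type='color']:active { background: #ccc; }

input[type='color']:disabled {
  border-color: #bbb;
  background: #f7f7f7;
  opacity: .5;
  cursor: not-allowed;
}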

The swatch wrapper

This is a component we only see in Chrome, so if we want a cross-browser result, we should probably ensure it doesn’t affect the swatch inside — this means ensuring it has no margin, border, padding or background and that its dimensions equal those of the actual input’s content-box.

In order to know whether we need to mess with these properties (and maybe others as a result) or not, let’s see what the browser defaults are for them.

Fortunately, we have no margin or border, so we don’t need to worry about these.

Chrome DevTools screenshot showing the margin and border values for the swatch wrapper.
The margin and border values for the swatch wrapper in Chrome.

We do however have a non-zero padding (of 4px 2px), so this is something we’ll need to zero out if we want to achieve a consistent cross-browser result.

Chrome DevTools screenshot showing the padding values for the swatch wrapper.
The padding values for the swatch wrapper in Chrome.

The dimensions are both conveniently set to 100%, which means we won’t need to mess with them.

Chrome DevTools screenshot showing the size values for the swatch wrapper.
The size values for the swatch wrapper in Chrome.

Something we need to note here is that we have box-sizing set to border-box, so the padding gets subtracted from the dimensions set on this wrapper.

Chrome DevTools screenshot showing the <code>box-sizing</code> value for the swatch wrapper.
The box-sizing value for the swatch wrapper in Chrome.

This means that while the padding-box, border-box and margin-box of our wrapper (all equal because we have no margin or border) are identical to the content-box of the actual <input type='color'> (which is 44px×23px in Chrome), getting the wrapper’s content-box involves subtracting the padding from these dimensions. The result is a 40px×15px box.

Chrome DevTools screenshot showing the box model for the swatch wrapper.
The box model for the swatch wrapper in Chrome.

The background is set to transparent, so that’s another property we don’t need to worry about resetting.

Chrome DevTools screenshot showing the background values for the swatch wrapper.
The background values for the swatch wrapper in Chrome.

There’s one more property set on this element that caught my attention: display. It has a value of flex, which means its children are flex items.

Chrome DevTools screenshot showing the display value for the swatch wrapper.
The display value for the swatch wrapper in Chrome.
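Putting all of that together, neutralizing the wrapper boils down to zeroing out its padding. A minimal sketch, using Chrome’s ::-webkit-color-swatch-wrapper pseudo-element for this component:

input[type='color']::-webkit-color-swatch-wrapper {
  /* the only default that gets in the way: the 4px 2px padding */
  padding: 0;
  /* margin and border are already 0, the background is already transparent
     and width/height are already 100%, so nothing else needs resetting */
}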

The swatch

This is a component we can style in Chrome and Firefox. Sadly, Edge doesn’t expose it to allow us to style it, so we cannot change properties we might want to, such as border, border-radius or box-shadow.

The box-sizing property is one we need to set explicitly if we plan on giving the swatch a border or a padding because its value is content-box in Chrome, but border-box in Firefox.

Comparative screenshots of DevTools in the two browsers showing the computed values of box-sizing for the swatch component.
The box-sizing values for the swatch viewed in Chrome (top) and Firefox (bottom).
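A sketch of evening that out might look like this, with ::-webkit-color-swatch and ::-moz-color-swatch being the swatch pseudo-elements in Chrome and Firefox. The rules are kept separate so that a selector one browser doesn’t recognize can’t invalidate the other:

input[type='color']::-webkit-color-swatch {
  box-sizing: border-box;
}

input[type='color']::-moz-color-swatch {
  box-sizing: border-box;
}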

Fortunately, the font-size is inherited from the input itself so it’s the same.

Comparative screenshots of DevTools in the two browsers showing the computed values of font-size for the swatch component.
The font-size values for the swatch viewed in Chrome (top) and Firefox (bottom).

The margin computes to 0 in both Chrome and Firefox.

Comparative screenshots of DevTools in the two browsers showing the computed values of margin for the swatch component.
The margin values for the swatch viewed in Chrome (top) and Firefox (bottom).

This is because most margins haven’t been set, so they end up being 0, which is the default for <div> elements. However, Firefox sets the inline margins to auto and we’ll get to why that computes to 0 in just a moment.

Screenshot of Firefox DevTools.
The inline margin for the swatch being set to auto in Firefox.

The border is solid 1px in both browsers. The only thing that differs is the border-color, which is rgb(119, 119, 119) in Chrome and grey (or rgb(128, 128, 128), so slightly lighter) in Firefox.

Comparative screenshots of DevTools in the two browsers showing the computed values of border for the swatch component.
The border values for the swatch viewed in Chrome (top) and Firefox (bottom).

Note that the computed border-width in Firefox (at least on Windows) depends on the OS zoom level, just as it is in the case of the actual input.

The padding is luckily 0 in both Chrome and Firefox.

Comparative screenshots of DevTools in the two browsers showing the computed values of padding for the swatch component.
The padding values for the swatch viewed in Chrome (top) and Firefox (bottom).

The dimensions end up being exactly what we’d expect to find, assuming the swatch covers its parent’s entire content-box.

Comparative screenshots of DevTools in the two browsers showing the box model for the swatch component.
The box model for the swatch viewed in Chrome (top) and Firefox (bottom).

In Chrome, the swatch parent is the <div> wrapper we saw earlier, whose content-box is 40px×15px. This is equal to the margin-box and the border-box of the swatch (which coincide as we have no margin). Since the padding is 0, the content-box and the padding-box for the swatch are identical and, subtracting the 1px border, we get dimensions of 38px×13px.

In Firefox, the swatch parent is the actual input, whose content-box is 44px×19px. This is equal to the margin-box and the border-box of the swatch (which coincide as we have no margin). Since the padding is 0, the content-box and the padding-box for the swatch are identical and, subtracting the 1px border, we get dimensions of 42px×17px.

In Firefox, we see that the swatch is made to cover its parent’s content-box by having both its dimensions set to 100%.

Comparative screenshots of DevTools in the two browsers showing the size values for the swatch component.
The size values for the swatch viewed in Chrome (top) and Firefox (bottom).

This is the reason why the auto value for the inline margin computes to 0.

But what about Chrome? We cannot see any actual dimensions being set. Well, this result is due to the flex layout and the fact that the swatch is a flex item that’s made to stretch such that it covers its parent’s content-box.

Chrome DevTools screenshot showing the flex value for the swatch wrapper.
The flex value for the swatch wrapper in Chrome.

Final thoughts

Phew, we covered a lot of ground here! While it may seem exhaustive to dig this deep into one specific element, this is the sort of exercise that illustrates how difficult cross-browser support can be. We have our own styles, user agent styles and operating system styles to traverse and some of those are always going to be what they are. But, as we discussed at the very top, this winds up being an accessibility issue at the end of the day, and something to really consider when it comes to implementing a practical, functional application of a color input.

Remember, a lot of this is ripe territory to reach out to browser vendors and let them know how they can update their implementations based on your reported use cases. Here are the three tickets I mentioned earlier where you can either chime in or reference to create a new ticket:

The post Color Inputs: A Deep Dive into Cross-Browser Differences appeared first on CSS-Tricks.

CSS-Tricks on Flywheel

Post retrieved from: CSS-Tricks on Flywheel

I first heard of Flywheel through their product Local, which is a native app for working on WordPress sites. If you ask around for what people use for that kind of work, you’ll get all sorts of answers, but an awful lot of very strong recommendations for Local. I’ve become one of them! We ultimately did a sponsored post for Local, but that’s based on the fact that now 100% of my local WordPress development work is done using it and I’m very happy with it.

Now I’ve taken the next step and moved all my production sites to Flywheel hosting!

Full disclosure here, Flywheel is now a sponsor of CSS-Tricks. I’ve been wanting to work with them for a while. I’ve been out to visit them in Omaha! (👋 at Jamie, Christi, Karissa, and everybody I’ve worked with over there.) Part of our deal includes the hosting. But I was a paying customer and user of Flywheel before this on some sites, and my good experiences there are what made me want to get this sponsorship partnership cooking! There has been big recent news that Flywheel was acquired by WP Engine. I’m also a fan of WP Engine, another premium WordPress host that has done innovative things with hosting, so I’m optimistic that a real WordPress hosting powerhouse is being formed and that I’ve got my sites in the right place.

Developing on Local is a breeze

It feels like a breath of fresh air to me, as running all the dev dependencies for WordPress has forever been a bit of a pain in the butt. Sometimes you have it going fine, but then something breaks in the most inscrutable possible way and it takes forever to get going again. Whatever, you know what I mean. At this point, I’ve been running Local for over a year and have had almost no issues with it.

There are all kinds of features worth checking out here. Here’s one that is very likely useful to bigger teams. Say you have a Flywheel account with a bunch of production sites on it. Then a new person starts working with you and they have their own computer. You connect Local to Flywheel, and you can pull down the site and have it ready to work on. That’s pretty sweet.

Local doesn’t lock you into anything either. You can use Local for local development and literally use nothing else. Local can push a site up to Flywheel hosting too, which I’ve found to be mighty useful particularly for that first deployment of a new site, but you don’t have to use that if you don’t want. I’ll cover more about workflow below.

Other features that I find worthy of note:

  • Spinning up a new site takes just a second. A quick walkthrough of a wizard asks you for some login details but otherwise offers smart-but-customizable defaults.
  • Dealing with HTTPS locally is easy. It will create a certificate for you and trust it locally with one click.
  • You can flip on „Live Link”, which uses ngrok to create a live, sharable URL to your localhost site. Great for temporarily showing a client or co-worker something without having to move anything.
  • One click to pop open the database in Sequel Pro, my favorite free database tool. Much easier than trying to spin up phpMyAdmin or whatever on the web to manage from there.

Flywheel’s Dashboard is so clear

I love the simple UI of Local, and I really like how that same design and spirit carries over into the Flywheel hosting dashboard.

There are so many things the dashboard makes easy:

  • You need an SSL cert? Click some buttons.
  • Wanna force HTTPS? Flip a switch.
  • Wanna convert the site to Multisite? Hit a button.
  • Need to edit the database? There is a UI around it built in.
  • Want a CDN? Toggle a thing.
  • Need to invite a collaborator on a site? Go for it.
  • Need a backup? They’re in there; download one or restore to that point.

It’s a big deal when everything is simple and works. It means you aren’t burning hours fighting with tools and can instead spend that time on work that pushes you forward.

Workflow

When I set up my new CSS-Tricks workflow, I had Flywheel move the site for me (thanks gang!) (no special treatment either, they’ll do that for anybody).

I’ve got Local already, so my local development process is the same. But I needed to update my deployment workflow for the new hosting. Local can push a site up to Flywheel hosting, but it just zips everything up and sends it all up. Great for first deployment but not perfect for tiny little changes like 95% of the work I do. There is a new Local for Teams feature, which uses what they call MagicSync for deployment, which only deploys changed files. That’s very cool, but I like working with a Git-based system, where ultimately merges to master are what trigger deployment of the changed files.

For years I’ve used Beanstalk for Git-based deployment over SFTP. I still am using Beanstalk for many sites and think it’s a great choice, but Beanstalk has the limitation that the Git-repo is basically a private Git repo hosted by Beanstalk itself.

During this change, I needed to switch up what the root of the repo is (more on that in a second) so I wanted to create a new repo. I figured rather than doing that on Beanstalk, I’d make a private GitHub repo and set up deployment from there. There are services like DeployHQ and DeployBot that will work well for that, but I went with Buddy, which has a really really nice UI for managing all this stuff, and is capable of much more than just deployment should I ultimately need that.

Regarding the repo itself, one thing that I’ve always done with my WordPress sites is just make the repo the whole damn thing starting at the root. I think it’s just a legacy/comfort thing. I had some files at the root I wanted to deploy along with everything else and that seemed like the easiest way. In WordPress-land, this isn’t usually how it’s done. It’s more common to have the /wp-content/ folder be the root of the repo, as those are essentially the only files unique to your installation. I can imagine setups where even individual themes are their own repos and get deployed alone.

I figured I’d get on board with a more scoped deployment, but also, I didn’t have much of a choice. Flywheel literally locks down all WordPress core files, so if your deployment system tries to override them, it will just fail. That actually sounds great to me. There is no reason anyone from the outside should alter those files, might as well totally remove it as an attack vector. Flywheel itself keeps the WordPress version up to date. So I made a new repo with /wp-content/ at the root, and I figured I’d make it on GitHub instead just because that’s such an obvious hub of developer activity and keeps my options wide open for deployment choices.

Maybe I’ll open source it all one day when I’ve had a chance to comb through it.

For the same kind of spiritual reasons, during the move, I moved the DNS over to Cloudflare. This gives me control over DNS from a third party, so it’s easy for me to point things where I need them. Kind of a decentralization of concerns. That’s not for everyone, but it’s great for me on this project. While now I might suffer from Cloudflare outages (rare, but it literally just happened), I benefit from all sorts of additional security and performance that Cloudflare can provide.

So the workflow is Local > GitHub > Buddy > Flywheel.

And the hosting is Cloudflare > Flywheel with image assets on Cloudinary.

And I’ve got backups from both Flywheel and Jetpack/VaultPress.

The post CSS-Tricks on Flywheel appeared first on CSS-Tricks.

Five Methods for Five-Star Ratings

Post retrieved from: Five Methods for Five-Star Ratings

In the world of likes and social statistics, reviews are a very important method for leaving feedback. Users often like to know the opinions of others before deciding on items to purchase themselves, or even articles to read, movies to see, or restaurants to dine at.

Developers often struggle with reviews — it is common to see inaccessible and over-complicated implementations. Hey, CSS-Tricks has a snippet for one that’s now bordering on a decade old.

Let’s walk through new, accessible and maintainable approaches for this classic design pattern. Our goal will be to define the requirements and then take a journey on the thought-process and considerations for how to implement them.

Scoping the work

Did you know that using stars as a rating dates all the way back to 1844 when they were first used to rate restaurants in Murray’s Handbooks for Travellers — and later popularized by Michelin Guides in 1931 as a three-star system? There’s a lot of history there, so no wonder it’s something we’re used to seeing!

There are a couple of good reasons why they’ve stood the test of time:

  1. Clear visuals (in the form of five hollow or filled stars in a row)
  2. A straightforward label (that provides an accessible description, like aria-label)

When we implement it on the web, it is important that we focus on meeting both of those outcomes.

It is also important to implement features like this in the most versatile way possible. That means we should reach for HTML and CSS as much as possible and try to avoid JavaScript where we can. And that’s because:

  1. JavaScript solutions will always differ per framework. Patterns that are typical in vanilla JavaScript might be anti-patterns in frameworks (e.g. React prohibits direct document manipulation).
  2. Languages like JavaScript evolve fast, which is great for the community, but not so great for articles like this. We want a solution that’s maintainable and relevant for the long haul, so we should base our decisions on consistent, stable tooling.

Methods for creating the visuals

One of the many wonderful things about CSS is that there are often many ways to write the same thing. Well, the same thing goes for how we can tackle drawing stars. There are five options that I see:

  • Using an image file
  • Using a background image
  • Using SVG to draw the shape
  • Using CSS to draw the shape
  • Using Unicode symbols

Which one to choose? It depends. Let’s check them all out.

Method 1: Using an image file

Using images means creating elements — at least 5 of them to be exact. Even if we’re calling the same image file for each star in a five-star rating, that’s five total requests. What are the consequences of that?

  1. More DOM nodes make document structure more complex, which could cause a slower page paint. The elements themselves need to render as well, which means either the server response time (if SSR) or the main thread generation (if we’re working in a SPA) has to increase. That doesn’t even account for the rendering logic that has to be implemented.
  2. It does not handle fractional ratings, say 2.3 stars out of 5. That would require a second group of duplicated elements masked with clip-path on top of them. This increases the document’s complexity by a minimum of seven more DOM nodes, and potentially tens of additional CSS property declarations.
  3. Optimized performance ought to consider how images are loaded, and implementing something like lazy-loading for off-screen images becomes increasingly harder when repeated elements like this are added to the mix.
  4. It makes a request, which means that caching TTLs should be configured in order to achieve an instantaneous second image load. However, even if this is configured correctly, the first load will still suffer because it has to wait on the server’s TTFB. Prefetch and preconnect techniques, or a service worker, should be considered in order to optimize the first load of the image.
  5. It creates a minimum of five non-meaningful elements for a screen reader. As we discussed earlier, the label is more important than the image itself. There is no reason to leave them in the DOM because they add no meaning to the rating — they are just a common visual.
  6. The images might be a part of manageable media, which means content managers will be able to change the star appearance at any time, even if it’s incorrect.
  7. It allows for a versatile appearance of the star, however the active state might only be similar to the initial state. It’s not possible to change the image src attribute without JavaScript and that’s something we’re trying to avoid.

Wondering how the HTML structure might look? Probably something like this:

<div class="Rating" aria-label="Rating of this item is 3 out of 5">
  <img src="/static/assets/star.png" class="Rating--Star Rating--Star__active">
  <img src="/static/assets/star.png" class="Rating--Star Rating--Star__active">
  <img src="/static/assets/star.png" class="Rating--Star Rating--Star__active">
  <img src="/static/assets/star.png" class="Rating--Star">
  <img src="/static/assets/star.png" class="Rating--Star">
</div>

In order to change the appearance of those stars, we can use multiple CSS properties. For example:

.Rating--Star {
  filter: grayscale(100%); /* maybe we want inactive stars to turn grey */
  opacity: .3; /* maybe we want inactive stars to become semi-transparent */
}

An additional benefit of this method is that the <img> element is set to inline-block by default, so it takes a little bit less styling to position them in a single line.

Accessibility: ★★☆☆☆
Management: ★★★★☆
Performance: ★☆☆☆☆
Maintenance: ★★★★☆
Overall: ★★☆☆☆

Method 2: Using a background image

This was once a fairly common implementation. That said, it still has its pros and cons.

For example:

  1. Sure, it’s only a single server request which alleviates a lot of caching needs. At the same time, we now have to wait for three additional events before displaying the stars: That would be (1) the CSS to download, (2) the CSSOM to parse, and (3) the image itself to download.
  2. It’s super easy to change the state of a star from empty to filled since all we’re really doing is changing the position of a background image. However, having to crack open an image editor and re-upload the file anytime a change is needed in the actual appearance of the stars is not the most ideal thing as far as maintenance goes.
  3. We can use CSS properties like background-repeat and clip-path to reduce the number of DOM nodes. We could, in a sense, use a single element to make this work (a sketch follows below). On the other hand, it’s not great that we don’t technically have good accessible markup to identify the images to screen readers and have the stars be recognized as inputs. Well, not easily.

In my opinion, background images are probably best reserved for complex star appearances where neither CSS nor SVG suffices to get the exact styling down. Otherwise, using background images still presents a lot of compromises.
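For reference, a single-element sketch of this approach could look something like this (the sprite paths and the 20px star size are hypothetical):

<div class="Rating" style="--percent: 60%;" aria-label="Rating of this item is 3 out of 5."></div>

.Rating {
  width: 100px; /* five 20px-wide stars */
  height: 20px;
  /* empty stars repeated across the whole row */
  background: url('/static/assets/star-empty.svg') repeat-x left center / 20px 20px;
}

.Rating::before {
  content: '';
  display: block;
  height: 100%;
  width: var(--percent); /* 60% fills three of the five stars */
  /* filled stars only over the rated portion */
  background: url('/static/assets/star-filled.svg') repeat-x left center / 20px 20px;
}

The accessible label still does the heavy lifting here; the backgrounds are purely visual.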

Accessibility: ★★★☆☆
Management: ★★★★☆
Performance: ★★☆☆☆
Maintenance: ★★★☆☆
Overall: ★★★☆☆

Method 3: Using SVG to draw the shape

SVG is great! It has a lot of the same custom drawing benefits as raster images but doesn’t require a server call if it’s inlined because, well, it’s simply code!

We could inline five stars into HTML, but we can do better than that, right? Chris has shown us a nice approach that allows us to provide the SVG markup for a single shape as a <symbol> and call it multiple times with <use>.

<!-- Draw the star as a symbol and remove it from view -->
<svg xmlns="http://www.w3.org/2000/svg" style="display: none;">
  <symbol id="star" viewBox="214.7 0 182.6 792">
    <!-- <path>s and whatever other shapes in here -->  
  </symbol>
</svg>
    
<!-- Then use anywhere and as many times as we want! -->
<svg class="icon">
  <use xlink:href="#star" />
</svg>

<svg class="icon">
  <use xlink:href="#star" />
</svg>

<svg class="icon">
  <use xlink:href="#star" />
</svg>

<svg class="icon">
  <use xlink:href="#star" />
</svg>

<svg class="icon">
  <use xlink:href="#star" />
</svg>

What are the benefits? Well, we’re talking zero requests, cleaner HTML, no worries about pixelation, and accessible attributes right out of the box. Plus, we’ve got the flexibility to use the stars anywhere and the scale to use them as many times as we want with no additional penalties on performance. Score!

The ultimate benefit is that this doesn’t require additional overhead, either. For example, we don’t need a build process to make this happen and there’s no reliance on additional image editing software to make further changes down the road (though, let’s be honest, it does help).
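Styling the reused symbol is then just a matter of setting its fill (assuming the <path>s inside the <symbol> don’t hard-code their own fill attribute; the .is-active class below is a hypothetical hook toggled per star):

.icon {
  width: 1em;
  height: 1em;
  fill: #ccc; /* empty star color */
}

.icon.is-active {
  fill: #fc0; /* filled star color */
}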

Accessibility: ★★★★★
Management: ★★☆☆☆
Performance: ★★★★★
Maintenance: ★★★★☆
Overall: ★★★★☆

Method 4: Using CSS to draw the shape

This method is very similar to the background-image method, though it improves on it by drawing the shape with CSS properties rather than making a call for an image. We might think of CSS as styling elements with borders, fonts and other stuff, but it’s capable of producing some pretty complex artwork as well. Just look at Diana Smith’s now-famous “Francine” portrait.

Francine, a CSS replica of an oil painting done in CSS by Diana Smith (Source)

We’re not going to get that crazy, but you can see where we’re going with this. In fact, there’s already a nice demo of a CSS star shape right here on CSS-Tricks.

See the Pen
Five stars!
by Geoff Graham (@geoffgraham)
on CodePen.

Or, hey, we can get a little more crafty by using the clip-path property to draw a five-point polygon. Even less CSS! But, buyer beware, because your cross-browser support mileage may vary.

See the Pen
5 Clipped Stars!
by Geoff Graham (@geoffgraham)
on CodePen.

Accessibility: ★★★★★
Management: ★★☆☆☆
Performance: ★★★★★
Maintenance: ★★☆☆☆
Overall: ★★★☆☆

Method 5: Using Unicode symbols

This method is very nice, but very limited in terms of appearance. Why? Because the appearance of the star is set in stone as a Unicode character. But, hey, there are variations for a filled star (★) and an empty star (☆) which is exactly what we need!

Unicode characters are something you can either copy and paste directly into the HTML:

See the Pen
Unicode Stars!
by Geoff Graham (@geoffgraham)
on CodePen.

We can use font, color, width, height, and other properties to size and style things up a bit, but there’s not a whole lot of flexibility here. Still, this is perhaps the most basic HTML approach of the bunch, so basic that it almost seems too obvious.

Instead, we can move the content into the CSS as a pseudo-element. That unleashes additional styling capabilities, including using custom properties to fill the stars fractionally:

See the Pen
Tiny but accessible 5 star rating
by Fred Genkin (@FredGenkin)
on CodePen.

Let’s break this last example down a bit more because it winds up taking the best benefits from other methods and splices them into a single solution with very little drawback while meeting all of our requirements.

Let’s start with the HTML. There’s a single element that makes no calls to the server while maintaining accessibility:

<div class="stars" style="--rating: 2.3;" aria-label="Rating of this product is 2.3 out of 5."></div>

As you may see, the rating value is passed as an inlined custom CSS property (--rating). This means there is no additional rendering logic required, except for displaying the same rating value in the label for better accessibility.

Let’s take a look at that custom property. It’s actually a conversion from a number value to a percentage that’s handled in the CSS using the calc() function:

--percent: calc(var(--rating) / 5 * 100%);

I chose to go this route because CSS properties — like width and linear-gradient — do not accept <number> values. They accept <length> and <percentage> values instead, which carry specific units, like %, px and em. Initially, the rating value is a float, which is a <number> type. Using this conversion helps ensure we can use the value in a number of ways.

Filling the stars may sound tough, but turns out to be quite simple. We need a linear-gradient background to create hard color stops where the gold-colored fill should end:

background: linear-gradient(90deg,
  var(--star-background) var(--percent), 
  var(--star-color) var(--percent)
);

Note that I am using custom properties for colors because I want the styles to be easily adjustable. Because custom properties are inherited from the parent element’s styles, you can define them once on the :root element and then override them in an element wrapper. Here’s what I put in the root:

:root {
  --star-size: 60px;
  --star-color: #fff;
  --star-background: #fc0;
}

The last thing I did was clip the background to the shape of the text so that the background gradient takes the shape of the stars. Think of the Unicode stars as stencils that we use to cut out star shapes from the background color. Or like cookie cutters in the shape of stars mashed right into the dough:

-webkit-background-clip: text;
-webkit-text-fill-color: transparent;

The browser support for background clipping and text fills is pretty darn good. IE11 is the only holdout.
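Pulling the pieces together, the whole component fits in one rule set plus a pseudo-element. Here is a sketch assembled from the snippets above (the font-family and letter-spacing lines are assumptions added to keep the glyph rendering and spacing predictable):

.stars {
  --percent: calc(var(--rating) / 5 * 100%);
  display: inline-block;
  font-size: var(--star-size);
  font-family: Times; /* any font with a reliable ★ glyph will do */
  line-height: 1;
}

.stars::before {
  content: '★★★★★';
  letter-spacing: 3px; /* optional breathing room between the stars */
  background: linear-gradient(90deg,
    var(--star-background) var(--percent),
    var(--star-color) var(--percent)
  );
  -webkit-background-clip: text;
  -webkit-text-fill-color: transparent;
}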

Accessibility: ★★★★★
Management: ★★☆☆☆
Performance: ★★★★★
Maintenance: ★★★★★
Overall: ★★★★★

Final thoughts

Criteria | Image Files | Background Image | SVG | CSS Shapes | Unicode Symbols
Accessibility | ★★☆☆☆ | ★★★☆☆ | ★★★★★ | ★★★★★ | ★★★★★
Management | ★★★★☆ | ★★★★☆ | ★★☆☆☆ | ★★☆☆☆ | ★★☆☆☆
Performance | ★☆☆☆☆ | ★★☆☆☆ | ★★★★★ | ★★★★★ | ★★★★★
Maintenance | ★★★★☆ | ★★★☆☆ | ★★★★☆ | ★★☆☆☆ | ★★★★★
Overall | ★★☆☆☆ | ★★★☆☆ | ★★★★☆ | ★★★☆☆ | ★★★★★

Of the five methods we covered, two are my favorites: using SVG (Method 3) and using Unicode characters in pseudo-elements (Method 5). There are definitely use cases where a background image makes a lot of sense, but that seems best evaluated case-by-case as opposed to a go-to solution.

You always have to consider all the benefits and downsides of a specific method. This, in my opinion, is the beauty of front-end development! There are multiple ways to go, and proper experience is required to implement features efficiently.

The post Five Methods for Five-Star Ratings appeared first on CSS-Tricks.