I’ve just produced the Disposition of Comments
for the Media Queries Level 4 specification.
A DoC is a W3C document
whose purpose is to demonstrate that a work-in-progress specification has been widely reviewed,
not only by members of the working group that writes it,
but also by other relevant working groups and by the general public,
and that all these comments have been formally addressed.
Having received many comments from a diverse audience,
and having addressed them,
is a key part of going from an interesting idea to a worldwide standard.
Just as when I prepared the last DoC for CSS-UI-3,
or the one before,
or the DoC for CSS-CONTAIN,
it proved to be a useful exercise,
beyond merely demonstrating wide review.
Every time, I find relevant comments that had been made a long time ago,
but had been forgotten before reaching a conclusion,
sometimes after having been discussed for a while,
sometimes never having been noticed at all.
Preparing a DoC gives us a chance to find and address these comments.
However, every time, one aspect of the DoC strikes me as odd and outdated.
The DoC is for a specific draft, traditionally an LCWD (Last Call Working Draft),
the last one before publication as a Candidate Recommendation.
This makes a lot of sense when drafts are made in private then revealed to the world,
then we get comments and address them, and repeat.
However, we do all our work in public,
and continuously take in comments from both members and the general public.
We no longer have an LCWD under the new process.
We are increasingly working under a publish early, publish often process.
Under such a process, the last draft before CR is likely to be barely different from the one before it,
which will also be similar to the one before it, etc.
Showing wide review of the last draft is not very useful.
A well-managed document will have received wide review spread over many iterations,
but the last draft will most likely not have received a lot of comments,
even if many people read it,
since the issues will have been ironed out before we decide we’re ready to transition to CR.
Actually, if a document has been well handled under the new process,
the last draft before CR should barely receive any comment,
since everything that can reasonably be discovered
other than by trying to implement and pass a test suite
should have been addressed already.
In practice, I believe that most recent DoCs have taken this into account,
and often cover a period longer than just the last draft before CR.
However, partly because of habits, and partly because of the tooling used to prepare these documents,
they all claim to be about a particular draft.
That’s not helpful, and I think we should change that.
Going forward, a DoC should declare not which draft it covers,
but which period of time,
and give a short justification for the chosen starting point.
I do not think it is particularly useful for the DoC to cover
the early stages of a specification
when the overall design is still in flux,
as many comments are invalidated by large rewrites of important parts.
Different specifications mature at different speeds,
so I do not think there will be a universal answer for when DoCs should start.
The FPWD (First Public Working Draft) is probably a good guideline,
as it typically signals the point where there’s agreement in the Working Group
about the general design and where we start ironing out the details.
As the W3C Process does not impose any particular form
for these DoC,
all we need to start is to agree to write DoCs that way,
and for tools like Bikeshed and Lea’s (unnamed?) tool to support this new style.
Whole books could probably be written about this,
but here’s a little primer about
how things are today, why it is a hard problem,
and how there’s hope that it is going to get better.
TL;DR: If you want to use contenteditable now,
don’t do it directly; instead, use a pre-made JavaScript editor,
such as CKEditor,
TinyMCE, and the like.
If they don’t do what you want,
and you need to do this yourself now,
be prepared for a lot of pain,
or for waiting for newer standards to stabilise, or both.
Now let’s dive in.
Contenteditable is an attempt at having a high level construct
that would enable rich text editing in web pages,
letting browsers do all the heavy lifting,
and letting the user (via typing, keyboard shortcuts, contextual menus…)
or the JavaScript (via invocations of execCommand) just ask for these things to happen.
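As a minimal sketch of that script-side invocation (execCommand has since been deprecated, but it is the API meant here):

```html
<!-- A minimal sketch: script asking the browser to perform an editing
     operation on the current selection via execCommand.
     (execCommand has since been deprecated, but it is the API meant here.) -->
<div contenteditable="true">Select some of this text, then run the script.</div>
<script>
  // Ask the browser to toggle bold on whatever is currently selected.
  document.execCommand("bold", false, null);
</script>
```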
There are a ton of entangled reasons why this is complex,
but just to get a sense of it,
here is a contrived example.
You can try playing with it here but I encourage you to think through it before trying:
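(The live demo from the original post is not reproduced here; the following sketch is an illustrative reconstruction, consistent with the questions that follow: two adjacent editable tables with alternating row backgrounds and borders, plus a numbered list.)

```html
<!-- Illustrative reconstruction only: two tables (3 and 5 columns)
     and a 7-item numbered list inside an editable region. -->
<style>
  table { border-collapse: collapse; display: table; } /* or inline-table */
  td    { border: 1px solid black; padding: 0.5em; }
  tr:nth-child(odd) { background: lightgray; } /* alternating rows */
</style>
<div contenteditable="true">
  <table>
    <tr><td>A1</td><td>A2</td><td>A3</td></tr>
    <tr><td>A4</td><td>A5</td><td>A6</td></tr>
  </table>
  <table>
    <tr><td>B1</td><td>B2</td><td>B3</td><td>B4</td><td>B5</td></tr>
    <tr><td>B6</td><td>B7</td><td>B8</td><td>B9</td><td>B10</td></tr>
  </table>
  <ol>
    <li>one</li><li>two</li><li>three</li><li>four</li>
    <li>five</li><li>six</li><li>seven</li>
  </ol>
</div>
```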
Got that?
Now the user creates a selection that goes from
the last cell of the last row of the first table
to the second cell in the first row of the second table.
Then they press “a” on the keyboard.
Generally, selecting something and then typing means
replacing the selection with what was typed,
but in this case, what does that mean?
Should the browser merge the two tables?
If not, which of the two tables does the “a” go into,
and which of the table cells?
What happens to the other cells?
Are they deleted,
or do they still exist but their content is deleted?
Or do they get merged using colspan?
If you do merge the two tables, how do you do that?
Naively remove the markup that corresponds to the selection?
Make it into a 5 column table?
8 columns?
What happens to the alternating background color?
Does anything depend on whether the tables were laid out
below each other (display: table)
or beside each other (display: inline-table)?
What happens to the borders?
Did it make a difference if the first cell of the second table
was styled with user-select: none?
What if it were contenteditable=false instead?
What font and font-weight shall the inserted “a” use?
What if instead of typing “a” you try and paste from the clipboard
after copying the 3rd to 7th items of the list?
Does it affect whether the tables get merged?
Do you preserve the numbering?
What background do you get?
Do you get a border?
How about the font?
Does the same thing happen if you copy it from one browser (e.g. Firefox)
and paste into another one (e.g. Chrome)?
Would it make a difference if the styles were inline instead of cascaded?
…
There’s a million subtleties like this,
many of which don’t have an obvious correct answer,
as it depends what you’re trying to do.
The end result is that browsers are full of bugs
and inconsistent with each other,
and that the specs (ContentEditableTrue
and execCommand)
don’t cover all the cases and aren’t followed particularly closely by the browsers anyway.
Even if that was solved and everybody harmonised on one behaviour
(which isn’t happening, as browsers have mostly given up),
it still wouldn’t be good enough,
because, as a user of the feature, that harmonised behaviour may not be the one you wanted,
and you would then want a separate method
or way to opt into an alternative behaviour.
So web-based editors (CKEditor, TinyMCE, google docs…) go to great lengths
to work around contenteditable,
instead of using it.
For example they do live DOM diffing,
to try and figure out what contenteditable did to the document and for what reason,
undo it, and do it again in a different way.
What people are working on now
(with Johannes as a spec editor)
is a completely different approach,
where the browser does not do the heavy lifting,
and instead just provides events to inform a JavaScript-based editor
about what it is that the user is trying to do,
and APIs to facilitate doing that.
Step 1 in that story (which is reasonably far along)
is to make sure that everything that would cause a change in a contenteditable element
fires a JavaScript event before that change occurs, which:
informs the JavaScript about the user’s intent
allows the JavaScript to cancel the behavior the browser was about to provide,
ensuring that nothing is changed in the contenteditable element
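This step-1 model eventually shipped as the beforeinput event of the Input Events specification; here is a minimal sketch of how it can be used (the inputType values shown are from that specification):

```html
<!-- A minimal sketch of the step-1 model: every change to the editable
     region is announced by a cancelable "beforeinput" event. -->
<div contenteditable="true" id="editor">Edit me</div>
<script>
  const editor = document.getElementById("editor");
  editor.addEventListener("beforeinput", (event) => {
    // event.inputType describes the user's intent,
    // e.g. "insertText", "deleteContentBackward", "formatBold".
    console.log("intent:", event.inputType);
    if (event.inputType === "formatBold") {
      // Cancel the browser's default handling; the DOM is left untouched,
      // and a script-based editor can apply its own version of "bold".
      event.preventDefault();
    }
  });
</script>
```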
Step 2 in that story is to provide multiple modes of contenteditable,
where contenteditable=true is the one we know today, kept for legacy reasons,
while other values of contenteditable (something other than true or false) provide modes
where all the events described in step 1 still fire,
the insertion caret is still drawn,
but depending on the mode,
some of the events do not have a default action provided by the browser,
and unless JavaScript reacts to them, nothing happens at all.
contentEditable=false:
the element is not editable
contentEditable=events:
the caret is drawn
the events fire
nothing happens unless JavaScript reacts to the events
contentEditable=caret:
the caret is drawn
the caret can be moved by the user
the events fire
nothing else happens unless JavaScript reacts to the events
contentEditable=typing:
the caret is drawn
the caret can be moved by the user
the events fire
the user typing something when nothing is selected will insert text
the user attempting to move the caret will move the caret
IME-based composition of text works
nothing else happens unless JavaScript reacts to the events:
deletion (including the cut part of cut and paste) does nothing but fire an event
the paste part of cut/copy and paste does nothing but fire an event
replacement (select something then type) does nothing but fire an event
formatting commands (Ctrl+B to make something bold) do nothing but fire an event
…
contentEditable=true:
the caret is drawn
the caret can be moved by the user
the events fire
the browser has—and unless cancelled, applies—a default behaviour for all events
I put my physicist training to use. As a scientist, you are supposed to approach problems using a simple process:
Form a hypothesis that matches everything you know so far.
Make some testable predictions using this hypothesis.
Test reality to see if your predictions can be disproved.
Repeat those steps until you can no longer find
anything in reality that disagrees with what your hypothesis predicts.
Publish your findings, along with all the data you collected,
and the hypothesis you ended up with.
Wait for someone to find a prediction that your hypothesis makes and which doesn’t match reality.
Repeat the entire process.
When reverse-engineering a single implementation,
this is a very sound way to proceed,
and the number of times you have to run through step 3
is why I love JS Bin so much these days.
However, there’s one important complication
when what you’re trying to reverse-engineer and specify has not one,
but several implementations.
As you run steps 1–4,
occasionally you find that some implementations match your predictions but that some others do not.
Even though it shows that there is not yet complete interoperability,
the interesting thing is that this can sometimes be good news.
If something has been in the market for a while
with multiple implementations behaving interoperably,
there’s a very good chance the web at large now depends
on that specific behavior.
That does not necessarily mean web developers like the way it works.
Maybe they do, or maybe the feature is terribly designed and
they have to resort to elaborate workarounds to get it to do what they want.
But these workarounds are written, deployed,
and depend on the behavior every browser agrees about
and would likely break if they were to change.
When browsers do behave differently
— assuming the difference is not limited to browsers with negligible market share —
authors generally cannot rely on any particular behavior.
So they don’t, and hardly any web site depends on browsers keeping their current behavior.
This means we have an opportunity for making improvements.
If it’s some obscure detail of the feature that doesn’t really matter,
it is sometimes just as well to document it as undefined behavior
and move on.
Getting every vendor to align on something is costly;
it’s important to pick the right battles and not waste time on insignificant things.
We can always come back to it later anyway.
If it is some aspect of the feature that does matter,
I get to take off my scientist hat and put on a more judgemental one:
which of the variants I’ve uncovered makes the most sense?
Is there one that solves the problem better,
or solves more problems,
or fits better with how everything else works?
Which one do I like best?
Occasionally, none of these disagreeing implementations is particularly good.
Browser engineers are generally capable people but sometimes they’re in a hurry.
Or maybe they came up with this years before other features
which now conflict with their design were added.
With the benefit of being able to learn from their mistakes,
maybe I’ll be able to come up with something better.
Regardless of whether I want to go
with what one of the browsers did or with something I made up,
I still have to convince all the implementers that will need
to change that this is the right thing to do.
Although browser vendors can sometimes be uncooperative
and just drag their feet until others agree to match their implementations,
they generally do want the best for the web and will try to accommodate each other.
It is also fairly common that these discussions uncover some
aspect of the problem I had not yet noticed
and throw me back either at the science lab to find out what is really happening
or at the drawing board to come up with a better solution.
Having gone through many such cycles,
the user-select property
is now a hybrid that matches different browsers on different aspects.
The lack of inheritance and the behavior of the auto value
are in line with Microsoft’s implementation.
The none value, on the other hand, was superficially the same in all browsers,
but where they differed, the specification now rules in favor of Firefox’s approach…
And so on for various other parts of the feature.
I voted for you in the last legislative elections.
Your work in parliament had confirmed me in my choice,
and I welcomed your entry into the government.
Digital technology is a major societal and economic issue,
and you seemed to me to have the ideal skills and convictions
to carry out this mission.
Now that the intelligence bill has just passed the Assemblée Nationale,
I am writing to you today to share my deep disappointment.
Disappointed by the majority that supports this aberration.
Disappointed by the Assemblée Nationale,
whose absenteeism I cannot tell whether to attribute to a lack of political courage
or to a lack of understanding of what is at stake.
Disappointed by the government,
which exploits the tragic events of January
to justify this infringement of liberty.
Disappointed by you: elected deputy of my constituency in 2012,
national secretary for human rights of the Parti Socialiste until recently,
member of the government,
you embody all three.
On a subject combining digital technology and liberty,
much could have been expected of you.
In vain.
What Edward Snowden revealed should have been a warning,
not a source of inspiration.
I do not know whether you opposed this law
and failed to make reason prevail,
or whether you yourself defended this liberticidal law,
harmful to the economy
and ineffective for security.
Perhaps it is in the spirit of Monsieur Chevènement’s famous phrase
that you were not heard opposing this bill,
and that you acted from within the government.
Even if that were the case,
I would find little comfort in seeing how little effect it had
on a matter that should be at the very heart of your work.
It is never too late to do the right thing,
and I hope you will have the wisdom and the ability
to put the government back on the right path,
and to spare France this law,
which, far from protecting our republic,
instead gives it Orwellian overtones.
I’ve recently been working on the draft of a new CSS specification:
CSS-UI-4.
One of the features specified there is the user-select property.
This isn’t really a new feature:
it first appeared in User Interface for CSS3 back in 1999,
before being rejected and therefore not included in the specification that replaced it, CSS-UI-3.
However, despite the initial rejection,
browser vendors have experimented with it,
and over the years, it has found an audience.
There are still many interoperability problems,
but hopefully the specification will help resolve them.
Quite a few people have written about this property,
usually to speak about user-select: none,
and occasionally about user-select: element.
Here is a good and recent article (update: the domain has expired and now redirects to spam, so I’ve removed the link)
by Alex Muir
written in celebration of user-select support
reaching 90% of browsers according to caniuse.com.
I’d like to introduce a lesser-known ability of this property
that is a small but easy win for usability:
user-select: all.
After applying it to a piece of content,
if a user tries to select any part of that content,
all of it will be selected.
What is that good for?
Assume you have a piece of content in your document
which is mainly used by being copied and pasted around,
and needs to be kept in one piece.
Typical examples could be
an ID,
an invoice number,
a coupon code,
a checksum…
If you do nothing,
occasionally users trying to select it for copy and pasting will aim poorly,
and for example start their selection on the second character of the string,
rather than the first.
You can make their life easier by using
user-select: all.
Here’s an example:
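(The inline demo from the original post is not reproduced here; the sketch below shows the idea. The coupon code and class name are made up, and the vendor-prefixed forms that browsers required at the time are included.)

```html
<style>
  /* Prefixed forms were required in most browsers at the time. */
  .coupon {
    -webkit-user-select: all;
    -moz-user-select: all;
    user-select: all;
  }
</style>
<p>Your coupon code is <code class="coupon">SAVE-20-NOW</code>:
   click anywhere in it and the whole code is selected.</p>
```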
Unfortunately, this currently only works in Firefox and Safari,
but since it causes no harm to users of other browsers,
why not include it anyway?
While working on the CSS directional navigation properties,
the question of how to activate spatial navigation has come up a few times,
so I thought I’d just put it out there to have something to point to.
While this is a common feature in TV browsers or feature phone browsers,
it is less well known that several desktop browsers
(and desktop emulators of non-desktop browsers) support spatial navigation.
This allows the user to move the focus between elements of the web page
in 2D, which is often more convenient than the linear keyboard navigation
enabled by the Tab key.
The aforementioned directional navigation properties
give the author extra control over how this works.
You can read more about it
in this article by Daniel Davis.
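As a sketch of what those properties look like (the element ids here are made up; the nav-* property syntax is from CSS-UI-4):

```html
<style>
  /* With spatial navigation active, pressing "right" while #play is
     focused moves focus to #stop, overriding the default 2D heuristic. */
  #play { nav-right: #stop; }
  #stop { nav-left: #play; nav-right: #next; }
  #next { nav-left: #stop; }
</style>
<button id="play">Play</button>
<button id="stop">Stop</button>
<button id="next">Next</button>
```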
But even if the author does nothing, spatial navigation is still a big usability boost for keyboard users.
Here is a list of browsers that support spatial navigation,
and how to turn it on in each of them:
The Blink based browsers come with spatial navigation built-in, but turned off by default.
Launch from the command-line with the --enable-spatial-navigation argument to activate it
(on OS X, use open path/to/browser.app --args --enable-spatial-navigation),
then use the arrow keys.
This does not include support for the directional navigation properties.
This is probably the most mature implementation of spatial navigation,
and includes support for the directional navigation properties.
Hold down the Shift key and press the arrow keys.
Vivaldi is a new feature-rich browser
that aims to become the spiritual heir to what Opera used to be
before the switch to Blink and the UI rewrite.
It is the only actively developed desktop browser I know of
to offer spatial navigation out of the box.
As in Opera, hold down the Shift key and press the arrow keys.
At the time of writing, this is still very fragile,
and does not include support for the directional navigation properties.
Being based on Presto,
this works well and includes support for the directional navigation properties.
Use the arrow keys.
Alternatively, you can connect to a simulated remote control on http://localhost:5555
(See the Opera TV Emulator User Guide for details).
If you know of other browsers running on desktop and supporting spatnav,
let me
know,
and I’ll update this post.