Monday, September 18, 2017

Reflection on trip to Kiel

On Sunday, I flew home from my trip to Kiel, Germany. I was there for the Kieler Open Source und Linux Tage, September 15 and 16. It was a great conference! I wanted to share a few details while they are still fresh in my mind:

I gave a plenary keynote presentation about FreeDOS! I'll admit I was a little concerned that people wouldn't find "DOS" an interesting topic in 2017, but everyone was really engaged. I got a lot of questions—so many that we ran out of time before I could answer them all.

FreeDOS has been around for a long time. We started FreeDOS in 1994, when I was still an undergraduate physics student. I loved DOS at the time, and I was upset that Microsoft planned to eliminate DOS when they released the next version of Windows. If you remember, the then-current version was Windows 3.1, and it wasn't great. And Windows' history up to that point wasn't promising: Windows 1 looked pretty much like Windows 2, and Windows 2 looked like Windows 3. I decided that if Windows 4 looked anything like Windows 3.1, I wanted nothing to do with it. I preferred DOS to clicking around the clumsy Windows interface. So I decided to create my own version of DOS, compatible with MS-DOS so I could continue to run all my DOS programs.

We recently published a free ebook about the history of FreeDOS. You can find it on our website, at 23 Years of FreeDOS.

My second talk that afternoon was about usability testing in open source software. The crowd was smaller, but they seemed very engaged during the presentation, so that's good.

I talked about how I got started in usability testing in open source software, and focused most of my presentation on the usability testing we've done as part of the Outreachy internships. I highlighted the GNOME usability testing from my interns throughout my participation in Outreachy: Sanskriti, Gina, Renata, Ciarrai, and Diana.

Interesting note: Ciarrai's paper prototype test on the then-proposed Settings redesign will be published this week on OpenSource.com, so watch for that.

The conference recorded both presentations, and they'll be uploaded to YouTube in the next few days. I'll link to them when they are up.

Sunday, September 17, 2017

Documentation needs usability, too

If you're like most developers, you find writing documentation hard. It's even harder when you're writing for end users. How do you approach writing your documentation?

Remember that documentation needs good usability, too. If documentation is too difficult to read—if it's filled with grammatical mistakes, or the vocabulary is just too dense, or even if it's just too long—then few people will bother to read it. Your documentation needs to reach your audience where they are.

Finding the right tone and "level" of writing can be difficult. When I was in my Master's program, I referred to three different styles of writing: "High academic," "Medium academic," and "Low academic."

High academic is typical for many peer-reviewed journals. This writing is often very dense and uses large words that demonstrate the author's command of the field. High academic writing can seem very imposing.

Medium academic is more typical of undergraduate writing. It is less formal than high academic, yet more formal than what you find in the popular press.

Low academic tends to include most professional and trade publications. Low academic authors may sprinkle technical terms here and there, but generally write in a way that's approachable to their audience. Low academic writing uses contractions, although sparingly. Certain other formal writing conventions continue, however. For example, numbers should be written out unless they are measurements: "fifty" instead of "50," and "two-thirds" instead of "2/3." But do use "250 MB" and "1.18 GHz."

In my Master's program, I learned to adjust my writing style according to my instructors' preferences. One professor might have a very formal attitude towards academic writing, so I would use High academic. Another professor might approach the subject more loosely, so I would write in Medium academic. When I translated some of my papers into articles for magazines or trade journals, I wrote in Low academic.

And when I write my own documentation, I usually aim for Low academic. It's a good balance of professional writing that's still easy to read.

To make writing your own documentation easier, you might also consult the Google Developer Documentation Style Guide, which Google recently released for anyone to use. The guide covers style and tone, documenting future features, accessible content, and writing for a global audience, and includes details about language, grammar, punctuation, and formatting.

Wednesday, September 13, 2017

On my way to Kieler Open Source und Linux Tage

Just wanted to share a brief update that I'm now on my way to Kiel, Germany for the Kieler Open Source und Linux Tage. I will be sharing two presentations:

The FreeDOS Project: Then and Now

I'll be talking about the history of the FreeDOS Project, and a little about where things are headed. If you don't know about FreeDOS: FreeDOS is a complete, free, DOS-compatible operating system that you can use to play classic DOS games, run legacy business software, or develop embedded systems. Any program that works on MS-DOS should also run on FreeDOS.

Usability Testing in Open Source Software

This presentation is for anyone who works on open source software, and wants to make it easier for everyone to use. I'll talk about some easy methods you can use to test the usability of your software, and then how to quickly identify the "trouble spots" that you need to fix.

If you're planning to attend Kieler, please let me know!

Update: I've made my slides available for download. You can find them on my personal page. Both presentations are available under the Creative Commons Attribution license (CC BY).

Saturday, September 9, 2017

23 Years of FreeDOS

On June 29th, 2017, FreeDOS turned 23 years old. There’s nothing special about "23," but I thought it would be great to celebrate the anniversary by having a bunch of past and current users share their stories about why they use FreeDOS. So, I made a call for users to write their own FreeDOS stories.

Many people contributed their FreeDOS stories, and many of those stories appeared on the FreeDOS Blog. We have collected them into a free ebook, 23 Years of FreeDOS (CC BY 4.0). This ebook contains the voices of many of the users who contributed their stories, as well as the history of FreeDOS.

These stories are written from different perspectives, answering questions such as: "How did you discover FreeDOS?" "What do you use FreeDOS for?" and "How do you contribute to FreeDOS?" In short, I asked users to answer the question: "Why FreeDOS?"

Many individuals have helped make FreeDOS what it is, but this ebook represents only a few of them. I hope you enjoy this collection of 23 years of everything FreeDOS!

To download the free ebook, go to 23 Years of FreeDOS (ebook) on the FreeDOS website.

Flat design is harder to understand

Interesting research from the Nielsen Norman Group shows that flat web design is harder for people to understand. The usability study was conducted on web pages, but the results apply equally well to graphical user interfaces.

First, let me define "flat" web design: websites used to display links in a color (usually blue) with an underline, and buttons in a 3D style. Web designers really didn't have to do anything to make that happen; the default browser styles define blue as the link color (purple as the visited link color), and any button element appears in a 3D style, such as with beveled edges.

In recent years, it has become fashionable for web designers to "flatten" the website design: links appear like normal paragraph text, and buttons are plain rectangles with no special decoration. Here's a trivial example of a flat web design:
Title text

Hi there! This is some sample text that you might find on a website. Let's say you are on a shopping website, this text might be an item description. Or if you're on a news website, this might be the summary for a news article. And below it, you might have a link for more information.

Click here for more

Looking at that example, do you know what to click on? Do you know that you can click on something? Actually, you can click on the title text or the "Click here for more." Both are links to Google.

These flat user interface elements attract less attention and cause uncertainty, according to Nielsen's research.

The Nielsen article is very interesting, and if you are interested in usability or user interface design (or web design), then I encourage you to read it. The article includes "gaze maps" (heat maps that show where testers looked on the web page) on web pages that used a flat design (weak signifiers) versus a more traditional design (strong signifiers).

It's not all bad, though. A flat design can work in some specific circumstances. From the article: "As we saw in this experiment, the potential negative consequences of weak signifiers are diminished when the site has a low information density, traditional or consistent layout, and places important interactive elements where they stand out from surrounding elements." (emphasis mine) And, "Ideally, to avoid click uncertainty, all three of those criteria should be met, not just one or two."

So your best bet in user interface design is to make sure clickable items look clickable: buttons should have a 3D design, and links should be styled with a different color and underline to look like clickable links instead of regular text.

Thursday, September 7, 2017

Dissecting the Sierpinski Triangle program

Last week, I shared my return to old school programming. My first home computer was an Apple II clone called the Franklin ACE 1000, and it was on this machine that my brother and I taught ourselves how to write programs in AppleSoft BASIC. And last week, I had an itch to return to the Apple II and write a simple BASIC program to generate the Sierpinski Triangle:


Let me briefly show you how to code the Sierpinski Triangle in AppleSoft BASIC. Fortunately, this is a simple program that only uses a few functions and statements:
DIM
Creates an array variable

LET
Assigns a value to a variable

FOR ... NEXT
Basic iteration loop structure

PRINT
Displays information on the screen

INT()
Returns the integer portion of a number

RND()
Returns a random number between 0 and 1

And for the graphics, these statements:
GR or HGR
Sets standard graphics (GR) mode or high-resolution graphics (HGR) mode

COLOR= or HCOLOR=
Sets the color for drawing graphics in GR or HGR mode, respectively

PLOT or HPLOT
Plots a single pixel in GR or HGR mode, respectively

So with those instructions, I was able to iterate a chaos generation of the Sierpinski Triangle. Again, if you aren't familiar with this method to generate the Sierpinski Triangle, the brief rules are:
  1. Set three points that define a triangle
  2. Randomly select a point anywhere (x,y)
Then:
  1. Randomly select one of the triangle's points
  2. Set the new x,y to be the midpoint between the previous x,y and the triangle point
  3. Repeat
Let's start with the first step, to define the triangle's end points. I used two parallel single-dimension arrays, and assigned values to them. This assigns the points at 0,39 (left,bottom) and 20,0 (middle,top) and 39,39 (right,bottom):
DIM X(3)
DIM Y(3)
LET X(1) = 0
LET X(2) = 20
LET X(3) = 39
LET Y(1) = 39
LET Y(2) = 0
LET Y(3) = 39
Note that I used array indices starting at 1. (AppleSoft BASIC arrays actually begin at index 0, so DIM X(3) allocates indices 0 through 3; element 0 simply goes unused here.) And of course, all lines must have line numbers, but I won't show those here.

Then I needed a starting point somewhere on the board. Since any point will do, I simply hard-coded this at 10,10:
LET XX = 10
LET YY = 10
To draw each pixel in the Sierpinski Triangle, we need to be in graphics mode. So I use GR to set standard graphics mode, and set the draw color to blue (7):
GR
COLOR= 7
I chose to make 2,000 iterations on the triangle. Maybe this was overkill for GR mode, but it worked well in HGR mode, so I just left it. You start an iteration loop with the FOR statement that specifies the start and end values, and close the loop with the NEXT statement. Since loops can be nested, it's best to name the loop variable in the NEXT statement:
FOR I = 1 TO 2000
...
NEXT I
Inside the loop, I pick a triangle end point at random, then do some simple math to find the midpoint between the current XX,YY and the end point. AppleSoft BASIC only allows variable names up to two characters long. So here, IX stands for "index."

Technically, RND(1) returns a floating point value from 0 to 0.999…, so multiplying by 3 gives 0 to 2.999…. Adding 1 to this results in a floating point value between 1 and 3.999…. To use this as the index of an array, I want an integer, so I use the INT() function to return just the integer part, resulting in a final integer value between 1 and 3, inclusive:
LET IX = 1 + INT (3 * RND (1))
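If it helps to see that arithmetic in isolation, here is the same index computation sketched in Python (an illustrative translation, not the original AppleSoft code):

```python
import random

def pick_index():
    # random.random() plays the role of RND(1): a float in [0, 1).
    # Multiplying by 3 gives [0, 3), adding 1 gives [1, 4),
    # and int() truncates like INT(), yielding 1, 2, or 3.
    return 1 + int(3 * random.random())

# Every result falls in {1, 2, 3}, matching the BASIC array indices.
assert all(pick_index() in (1, 2, 3) for _ in range(10000))
```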
With that, I can compute the midpoint:
LET XN = INT ((XX + X(IX)) / 2)
LET YN = INT ((YY + Y(IX)) / 2)
Again, since AppleSoft BASIC only allows variable names of up to two characters long, XN represents "X new" and YN represents "Y new."

Then I reassign those values back to XX and YY, then plot the pixel on the screen:
LET XX = XN
LET YY = YN
PLOT XX,YY
And as a way to track the progress of the overall iteration, I print the counter I before starting the loop over again:
PRINT I
And that's how to create a Sierpinski Triangle in AppleSoft BASIC!

(Oops, I used floating point variables throughout, when I should have been using integer variables. My bad. If I were to code this again, I would use variables like XX% instead of XX. That would also obviate the need to use the INT() function, since the value would be automatically cast to integer when saving the value to the integer variable.)
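For readers who don't speak BASIC, the whole program can be sketched in modern Python. This is an illustrative translation, not the original code: Python lists are 0-based where the BASIC arrays used indices 1 to 3, and collecting points stands in for plotting them on screen.

```python
import random

def sierpinski(iterations=2000):
    # Triangle end points, matching the BASIC arrays X() and Y():
    # (0,39) left-bottom, (20,0) middle-top, (39,39) right-bottom.
    xs = [0, 20, 39]
    ys = [39, 0, 39]
    xx, yy = 10, 10              # arbitrary starting point, as in the BASIC version
    points = []
    for _ in range(iterations):
        ix = random.randrange(3)     # pick one of the three end points at random
        xx = (xx + xs[ix]) // 2      # midpoint, truncated like INT()
        yy = (yy + ys[ix]) // 2
        points.append((xx, yy))      # stands in for PLOT XX,YY
    return points

pts = sierpinski()
# All points stay on the 40x40 GR-mode grid.
assert all(0 <= x <= 39 and 0 <= y <= 39 for x, y in pts)
```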


BASIC is a very straightforward language, with only a few statements and essential functions. AppleSoft BASIC was not a very difficult language to learn. Even as a young child, I quickly figured out how to create simple math quizzes. From there, I graduated to larger and more complex programs, including one that effectively emulated the thermonuclear war simulator from the 1983 movie WarGames!

But I'll stop here, and leave it to you to learn more about BASIC on your own. You can find AppleSoft BASIC programming guides in lots of places on the Internet. Landsnail's Apple II Programmer's Reference is a good one.

Saturday, September 2, 2017

Return to old school programming

When my brother and I were growing up, our parents brought home an Apple II personal computer. Actually ours was one of the first Apple "clones," a Franklin ACE 1000, but it ran all of the original Apple software. And more importantly, you could write your own programs with the included AppleSoft BASIC.

My brother and I cracked open the computer guide that came with it, and slowly taught ourselves how to write programs in BASIC. My first attempts were fairly straightforward math quizzes and other simple programs. But as I gained more experience in BASIC, I was able to harness high resolution graphics mode and do all kinds of nifty things.

AppleSoft BASIC was my first programming language. And while I eventually moved on to compiled languages (I prefer C) and other programming environments, I think I'll always have a soft spot for AppleSoft BASIC.

BASIC was a very simple programming language. Two-letter variable names, line numbers, and other hallmarks were typical for AppleSoft BASIC. But even within these limitations, you could create pretty impressive programs if you were clever.

Recently, I've been spending free time playing around with an Apple II emulator, writing a few simple programs as a "throwback" to that old school programming. The Apple IIjs emulator runs in your web browser and very effectively simulates running an old Apple II computer from the 1980s. You can find other Apple II emulators specifically for Linux.

I want to share a program I recently wrote on Apple IIjs: a chaos generation of the Sierpinski Triangle. If you aren't familiar with this method to generate the Sierpinski Triangle, the brief rules are:

  1. Set three points that define a triangle (A,B,C)
  2. Randomly select a point anywhere (x,y)
Then:
  1. Randomly select one of the triangle's points (A,B,C)
  2. Set the new x,y to be the midpoint between the previous x,y and the triangle point
  3. Repeat

And with this rule set, I created a very simple implementation of the Sierpinski Triangle. This sample uses the standard graphics resolution mode (GR) with 40×40 pixels.


The code to generate this image is fairly straightforward:


Two thousand steps takes forever to run on the simulated 6502 microprocessor, by the way. Just like computing in the 1980s.

For a more interesting view of the Sierpinski Triangle on the Apple II, it helps to switch to a higher resolution. Apple's high resolution mode (HGR) allowed a whopping 280×192 pixels.


This requires changing two lines of code: line 50 sets HGR mode instead of GR mode, and line 140 uses HPLOT instead of PLOT.


I don't have our original Franklin ACE 1000 anymore, or an original Apple II computer, but at least I can return to old school programming whenever I like by using an emulator.

Monday, August 28, 2017

Leadership lessons from open source software

Just wanted to point to an article I recently wrote for CIO Review Magazine, about Leadership lessons from open source software. I've been involved in open source software since I was a university student—as a user, contributor, and maintainer. Today, I'm a chief information officer in local government. While my day job is unrelated to my personal interest in open source software, I still draw on many of the lessons I learned throughout my history in open source software projects.

My article shares three key lessons from open source software that I’ve carried into my career as chief information officer:
  1. Feedback is a gift
  2. Everyone brings different viewpoints
  3. You don’t have to do it all yourself
Looking for leadership lessons in unexpected sources can be interesting and insightful. We can find inspiration in everything we experience. For myself, I like to reflect on what I have done, to find ways to improve myself.

As chief information officer, I leverage many of the lessons I learned from maintaining or contributing to open source software. While I find insights from other areas, experience drives learning, and my twenty years of personal experience in open source software has taught me much about accepting feedback, listening to others, and sharing the burden. This applies directly to my professional career.

Sunday, August 27, 2017

A look back at Linux 1.0

The Linux kernel is 26 years old this year. And to mark this anniversary, I took a look back at where it all began. You can find my journey into Linux nostalgia over at OpenSource.com.

I discovered Linux in 1993. My first Linux distribution was Softlanding Linux System (SLS) 1.03, with Linux kernel 0.99 alpha patch level 11. That required a whopping 2MB of RAM, or 4MB if you wanted to compile programs, and 8MB to run the X Window System.

A year later, I upgraded to SLS 1.05, which sported the brand-new Linux kernel 1.0.

Check out the article for some great screenshots from SLS 1.05, including the color-enabled text-mode installer, a few full-screen console applications, and a sample X session with TWM, the Tab Window Manager.

Tuesday, August 15, 2017

Happy birthday, GNOME!

The GNOME desktop turns 20 today, and I'm so excited! Twenty years is a major milestone for any open source software project, especially a graphical desktop environment like GNOME that has to appeal to many different users. The 20th anniversary is definitely something to celebrate!

I wrote an article on OpenSource.com about "GNOME at 20: Four reasons it's still my favorite GUI." I encourage you to read it!

In summary: GNOME was a big deal to me because when GNOME first appeared, we really didn't have a free software "desktop" system. The most common environments at the time were window managers like FVWM and FVWM95, and alternatives like Window Maker or XFCE, but GNOME was the first complete, integrated "desktop" environment for Linux.

And over time, GNOME has evolved as technology has matured and Linux users have demanded more from their desktop than simply a system to manage files. GNOME 3 is modern yet familiar, striking that difficult balance between features and utility.

So, why do I still enjoy GNOME today?

  1. It's easy to get to work
  2. Open windows are easy to find
  3. No wasted screen space
  4. The desktop of the future

The article expands on these ideas, and provides a brief history of GNOME throughout the major milestones of GNOME 1, 2, and 3.

Saturday, August 12, 2017

Allan Day on The GNOME Way

If you don't read Allan Day's blog, I encourage you to do so. Allan is one of the designers on the GNOME Design team, and is also a great guy in person. Allan recently presented at GUADEC, the GNOME Users And Developers European Conference, about several key principles in GNOME design concepts. Allan has turned his talk into a blog post: "The GNOME Way." You should read it.

Allan writes in the introduction: "In what follows, I’m going to summarise what I think are GNOME’s most important principles. It’s a personal list, but it’s also one that I’ve developed after years of working within the GNOME project, as well as talking to other members of the community. If you know the GNOME project, it should be familiar. If you don’t know it so well, it will hopefully help you understand why GNOME is important."

A quick summary of those key principles:

1: GNOME is principled
"Members of the GNOME project don’t just make things up as they go along and they don’t always take the easiest path."

2: software freedom
"GNOME was born out of a concern with software freedom: the desire to create a Free Software desktop. That commitment exists to this day. "

3: inclusive software
"GNOME is committed to making its software usable by as many people as possible. This principle emerged during the project’s early years."

4: high-quality engineering
"GNOME has high standards when it comes to engineering. We expect our software to be well-designed, reliable and performant. We expect our code to be well-written and easy to maintain."

5: we care about the stack
"GNOME cares about the entire system: how it performs, its architecture, its security."

6: take responsibility for the user’s experience
"Taking responsibility means taking quality seriously, and rejecting the “works for me” culture that is so common in open source. It requires testing and QA."

Allan's article is a terrific read for anyone interested in why GNOME is the way it is, and how it came to be. Thanks, Allan!

Tuesday, August 8, 2017

Simplify, Standardize, and Automate

On my Coaching Buttons blog, I sometimes write about "Simplify, Standardize, and Automate." I have repeated this mantra throughout my professional career since 2008, when I worked in higher ed. Higher ed constantly faces the challenge of limited budgets; we often had to "do more with less." One way to respond to shrinking budgets was to become more efficient, which we did through a three-pronged approach: simplifying our environment, standardizing our systems, and automating tasks.

The concept of automation was always very important to me. Automation is very powerful: it can lift drudge work from the shoulders of our staff. By allowing a machine to do repetitive tasks, we free up our staff to do more valuable tasks.

What common tasks do you do every day that could be automated, and turned into a script or program? When I worked in higher ed, I shared this comment about automation:
"If you need a report from the Data Warehouse every month, documenting the steps is certainly a good first step. But it's much better to create a script to generate it for you automatically. The file just appears when you need it, without having to repeat the steps to create it manually. That's less time to manage an individual thing, leaving you more time to work on other tasks."
Kyle Rankin recently wrote at Linux Journal about the importance of automation, part of "Sysadmin 101." Kyle identifies several types of tasks you should automate, including routine and repeatable tasks, then goes on to discuss when you should automate and how you should automate.

If you are a systems administrator, and especially if you are new to systems administration, I encourage you to read Kyle's article. Then, learn about the automation available on your system. I leverage cron and (mostly) Bash scripts on my own Linux systems. I don't have very complex tasks with dependencies on other jobs, so that works well for me. If you need more complex automation, tools exist for that, too.
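As a minimal illustration of the kind of script I mean, here is a sketch in Python. The report name, the data source, and the departments are all hypothetical; cron would supply the schedule:

```python
#!/usr/bin/env python3
# Hypothetical monthly-report generator. Run it from cron
# (e.g. "0 6 1 * *" for 6am on the first of the month) and the
# file just appears, with no manual steps to repeat.
import csv
from datetime import date

def write_report(rows, path):
    # rows: iterable of (department, total) pairs from some data source
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["department", "total"])
        for dept, total in rows:
            writer.writerow([dept, total])

if __name__ == "__main__":
    # Sample data standing in for a real Data Warehouse query.
    sample = [("Registrar", 120), ("Library", 45)]
    write_report(sample, f"report-{date.today():%Y-%m}.csv")
```

The point isn't this particular script; it's that once the steps live in a script, the documentation and the automation are the same thing.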

Saturday, July 29, 2017

A new GNOME Board

After my Board term expired, I had planned to stay involved with the GNOME Foundation Board of Directors until the official hand-off to the new Board at GUADEC. Since GUADEC is happening right now, this marks the end of my time on the GNOME Board of Directors.

It was great to serve on the GNOME Board this year! We accomplished a lot. Among other things, we hired a new Executive Director, who I believe will provide strong leadership for GNOME. The Board is an important part of governance, too, and it demonstrated that by keeping GNOME moving forward in the absence of an Executive Director.

I may run for GNOME Board again in a few years, when things settle down for me. It's been a busy time lately, but as things reach a new normal, I'll be able to take on new activities in GNOME.

Good luck to everyone on the Board for the coming year! I know everyone is highly engaged, and that's what really matters for a successful Board.

Friday, July 14, 2017

How I put Linux in the enterprise

I recently wrote an article for OpenSource.com that tells the story about How I introduced my organization to Linux. Here's the short version:

I used to work in higher ed. In the late 1990s, we moved to a new student records system. We created an "add-on" web registration system, so students could register on-line—still a new idea in 1998. But when we finally went live, the load crushed the web servers. No one could register. We tried to fix it, but nothing worked.

Instead, we just shifted everything to Linux, and it worked! No code changes, just a different platform. That was our first time using Linux in the enterprise. When I left the university some seventeen years later, I think about two-thirds of our enterprise servers ran on Linux.

There's a lot going on behind the scenes here, so I encourage you to read the full article. The key takeaways aren't really the move to Linux. Instead, I use this as an example for how to deploy a big change in any environment: Solve a problem, don't stroke an ego. Change as little as possible. Be honest about the risks and benefits. And communicate broadly. These are the keys to success.

Friday, June 30, 2017

FreeDOS is 23 years old

I have been involved in open source software for a long time, since before anyone coined the term "open source." My first introduction to Free software was GNU Emacs on our campus Unix system, when I was an undergraduate. Then I discovered other Free software tools. Through that exposure, I decided to install Linux on my home computer in 1993. But as great as Linux was at the time, it had few applications like word processors and spreadsheets, so it was still limited: great for writing programs and analysis tools for my physics labs, but not (yet) for writing class papers or playing games.

So my primary system at the time was still MS-DOS. I loved DOS, and had since the 1980s. While the MS-DOS command line was under-powered compared to Unix, I found it very flexible. I wrote my own utilities and tools to expand the MS-DOS command line experience. And of course, I had a bunch of DOS applications and games. I was a DOS "power user." For me, DOS was a great mix of function and features, so that's what I used most of the time.

And while Microsoft Windows was also a thing in the 1990s, if you remember Windows 3.1, you should know that Windows wasn't a great system. Windows was ugly and difficult to use. I preferred to work at the DOS command line, rather than clicking around the primitive graphical user interface offered by Windows.

With this perspective, I was a little distraught to learn in 1994, through Microsoft's interviews with tech magazines, that the next version of Windows would do away with MS-DOS. It seemed MS-DOS was dead. Microsoft wanted everyone to move to Windows. But I thought "If Windows 3.2 or 4.0 is anything like Windows 3.1, I want nothing to do with that."

So in early 1994, I had an idea. Let's create our own version of DOS! And that's what I did.

On June 29, 1994, I made a little announcement to the comp.os.msdos.apps discussion group on Usenet. My post read, in part:
Announcing the first effort to produce a PD-DOS.  I have written up a
"manifest" describing the goals of such a project and an outline of
the work, as well as a "task list" that shows exactly what needs to be
written.  I'll post those here, and let discussion follow.
That announcement of "PD-DOS" or "Public Domain DOS" later grew into the FreeDOS Project that you know today. And today, FreeDOS is now 23 years old!

All this month, we've asked people to share their FreeDOS stories about how they use FreeDOS. You can find them on the FreeDOS blog, including stories from longtime FreeDOS contributors and new users. In addition, we've highlighted several interesting moments in FreeDOS history, including a history of the FreeDOS logo, a timeline of all FreeDOS distributions, an evolution of the FreeDOS website, and more. You can read everything on our celebration page at our blog: Happy 23rd birthday to FreeDOS.

Since we've received so many "FreeDOS story" contributions, I plan to collect them into a free ebook, which we'll make available via the FreeDOS website. We are still collecting FreeDOS stories for the ebook! If you use FreeDOS, and would like to contribute to the ebook, send me your FreeDOS story by Tuesday, July 18.

Monday, June 5, 2017

Help us celebrate 23 years of FreeDOS

This year on June 29, FreeDOS will turn 23 years old. That's pretty good for a legacy 16-bit operating system like DOS. It's interesting to note that we have been doing FreeDOS for longer than MS-DOS was a thing. And we're still going!

There's nothing special about "23 years old" but I thought it would be a good idea to mark this year's anniversary by having people contribute stories about how they use FreeDOS. So over at the FreeDOS Blog, I've started a FreeDOS blog challenge.

If you use FreeDOS, I'm asking you to write a blog post about it. Maybe your story is about how you found FreeDOS. Or about how you use FreeDOS to run certain programs. Or maybe you want to tell a story about how you installed FreeDOS to recover data that was locked away in an old program. There are lots of ways you could write your FreeDOS story. Tell us about how you use FreeDOS!

Your story can be as short or as long as you need it to be to talk about how you use FreeDOS.

Write your story, post it on your blog, and email me so I can find it. Or if you don't have a blog of your own, email your story to me and I'll put it up as a "guest post" on the FreeDOS Blog.

I'm planning to post a special blog item on June 29 to collect all of these great stories. So you need to write your story by June 28.

Tuesday, May 23, 2017

Please run for GNOME Board

Update: the election is over. Congratulations to the new Board members!
Are you a member of the GNOME Foundation? Please consider running for Board.

Serving on the Board is a great way to contribute to GNOME, and it doesn't take a lot of your time. The GNOME Board of Directors meets every week via a one-hour phone conference to discuss various topics about the GNOME Foundation and GNOME. In addition, individual Board members may volunteer to take on actions from meetings—usually to follow up with someone who asked the Board for action, such as a funding request.

At least two current Board members have decided not to run again this year. (I am one of them.) So if you want to run for the GNOME Foundation Board of Directors, this is an excellent opportunity!

If you are planning on running for the Board, please be aware that the Board meets 2 days before GUADEC begins to do a formal handoff, plan for the upcoming year, and meet with the Advisory Board. GUADEC 2017 is 28 July to 2 August in Manchester, UK. If elected, you should plan on attending meetings this year on 26 and 27 July in Manchester, UK.

To announce your candidacy, just send an email to foundation-announce that gives your name, your affiliation (who you work for), and a few sentences about your background and interest in serving on the Board.

Friday, May 19, 2017

Can't make GUADEC this year

This year, the GNOME Users And Developers European Conference (GUADEC) will be hosted in beautiful Manchester, UK between 28th July and 2nd August. Unfortunately, I can't make it. I missed last year, too. The timing is not great for me.

I work in local government, and just like last year, GUADEC falls during our budget time at the county. Our county budget is set every two years. That means during an "on" year, we make our budget proposals for the next two years. In the "off" year, we share a budget status.

I missed GUADEC last year because I was giving a budget status in our "off" year. And guess what? This year, department budget presentations again happen during GUADEC.

During GUADEC, I'll be making our county IT budget proposal. This is our one opportunity to share with the Board our budget priorities for the next two years, and to defend any budget adjustment. I can't miss this meeting.

Wednesday, May 17, 2017

GNOME and Debian usability testing

Intrigeri emailed me to share that "During the Contribute your skills to Debian event that took place in Paris last week-end, we conducted a usability testing session" of GNOME 3.22 and Debian 9. They have posted their usability test results at Intrigeri's blog: "GNOME and Debian usability testing, May 2017." The results are very interesting and I encourage you to read them!

There's nothing like watching real people do real tasks with your software. You can learn a lot about how people interact with the software, what paths they take to accomplish goals, where they find the software easy to use, and where they get frustrated. Normally we do usability testing with scenario tasks, presented one at a time. But in this usability test, they asked testers to complete a series of "missions." Each "mission" was a set of two or more goals. For example:

Mission A.1 — Download and rename file in Nautilus

  1. Download a file from the web, a PDF document for example.
  2. Open the folder in which the file has been downloaded.
  3. Rename the downloaded file to SUCCESS.pdf.
  4. Toggle the browser window to full screen.
  5. Open the file SUCCESS.pdf.
  6. Go back to the File manager.
  7. Close the file SUCCESS.pdf.

Mission A.2 — Manipulate folders in Nautilus

  1. Create a new folder named cats in your user directory.
  2. Create a new folder named to do in your user directory.
  3. Move the cats folder to the to do folder.
  4. Delete the cats folder.

These "missions" take the place of scenario tasks. My suggestion to the usability testing team would be to add a brief context that "sets the stage" for each "mission." In my experience, that helps testers get settled into the task. This may have been part of the introduction they used for the overall usability test, but generally I like to see a brief context for each scenario task.

The usability test results also include a heat map, to help identify any problem areas. I've talked about the Heat Map Method before (see also “It’s about the user: Applying usability in open source software.” Jim Hall. Linux Journal, print, December 2013). The heat map shows your usability test results in a neat grid, coded by different colors that represent increasing difficulty:

  • Green if the tester didn't have any problems completing the task.
  • Yellow if the tester encountered a few problems, but generally it was pretty smooth.
  • Orange if the tester experienced some difficulty in completing the task.
  • Red if the tester had a really hard time with the task.
  • Black if the task was too difficult and the tester gave up.

The colors borrow from the familiar green-yellow-red color scheme used in traffic signals, and which most people can associate with easy-medium-hard. The colors also suggest greater levels of "heat," from green (easy) to red (very hard) and black (too hard).

To build a heat map, arrange your usability test scenario tasks in rows, and your testers in columns. This provides a colorful grid. You can look across rows and look for "hot" rows (lots of black, red and orange) and "cool" rows (lots of green, with some yellow). Focus on the hot rows; these are where testers struggled the most.
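To make that concrete, here's a small Bash sketch (my own illustration, not part of Intrigeri's report) that renders one heat-map row per scenario task in the terminal, using ANSI background colors for the ratings. The task names and the single-letter rating codes are hypothetical shorthand:

```shell
#!/bin/bash
# Print one heat-map row per scenario task using ANSI background colors.
# Rating codes (hypothetical): g=green y=yellow o=orange r=red b=black
heatmap_row() {
  local task=$1; shift
  printf '%-24s' "$task"
  local rating code
  for rating in "$@"; do
    case $rating in
      g) code=42 ;;   # green background: completed with no problems
      y) code=103 ;;  # bright yellow: a few problems, generally smooth
      o) code=43 ;;   # dark yellow (closest ANSI has to orange): some difficulty
      r) code=41 ;;   # red: a really hard time
      b) code=40 ;;   # black: tester gave up
    esac
    printf '\e[%sm  \e[0m ' "$code"
  done
  printf '\n'
}

# One row per task, one colored cell per tester:
heatmap_row "Rename downloaded file" g g y g g
heatmap_row "Install a package"      r o b r y
```

Reading across the rows, the second (mostly red and black) is a "hot" row that deserves attention, while the first is a "cool" row.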


Intrigeri's heat map suggests some issues with B1 (install and remove a package), C2 (temporary files) and C3 (change default video player). There's some difficulty with A3 (create a bookmark in Nautilus) and C4 (add and remove world clocks), but these seem secondary. Certainly these are all issues to address, but the results suggest focusing on B1, C2 and C3 first.

For more, including observations and discussion, go read Intrigeri's article.

Saturday, May 6, 2017

Not running for Board this year

After some serious thinking, I've decided not to run for the GNOME Foundation Board of Directors for the 2017-18 session.

As the other directors are aware, I've over-committed myself. I think I did a good job keeping up with GNOME Board issues, but it was sometimes a real stretch. And due to some budget and planning items happening at work, I've been busier in 2017 than I planned. I've missed a few Board meetings due to meeting conflicts or other issues.

It's not fair to GNOME for me to continue to be on the Board if I'm going to be this busy. So I've decided to not run again this year, and let someone with more time take my seat.

However, I do plan to continue as director for the rest of the 2016-17 session.

Thursday, May 4, 2017

How I found Linux

Growing up through the 1980s and 1990s, I was always into computers. As I entered university in the early 1990s, I was a huge DOS nerd. Then I discovered Linux, a powerful Unix system that I could run on my home computer. And I have been a Linux user ever since.

I wrote my story for OpenSource.com, about How I got started with Linux.

In the article, I also talk about how I've deployed Linux in every organization where I've worked. I'm a CIO in local government now, and while we have yet to install Linux in the year since I've arrived, I have no doubt that we will someday.

Tuesday, April 18, 2017

A better March Madness script?

Last year, I wrote an article for Linux Journal describing how to create a Bash script to build your NCAA "March Madness" brackets. I don't really follow basketball, but I have friends that do, so by filling out a bracket at least I can have a stake in the games.

Since then, I realized my script had a bug that prevented any rank 16 team from ever beating a rank 1 team. So this year, I wrote another article for Linux Journal with an improved Bash script to build a better NCAA "March Madness" bracket. In brief, the updated script builds a custom random "die roll" based on the relative strength of each team. My "predictions" this year are included in the Linux Journal article.

Since the games are now over, I figured this was a great time to see how my bracket performed. If you followed the games, you know that there were a lot of upsets this year. No one really predicted the final two teams for the championship. So maybe I shouldn't be too surprised if my brackets didn't do well either. Next year might be a better comparison.

In the first round of the NCAA March Madness, you start with teams 1–16 in four regions, so that's 64 teams that compete in 32 games. In that "round of 64," my shell script correctly predicted 21 outcomes. That's not a bad start.

March Madness is single-elimination, so for the second round, you have 32 teams competing in 16 games. My shell script correctly guessed 7 of those games. So just under half were predicted correctly. Not great, but not bad.

In the third round, my brackets suffered. This is the "Sweet Sixteen," where 16 teams compete in 8 games, but my script only predicted 1 of those games.

And in the fourth round, the "Elite Eight" round, my script didn't predict any of the winners. And that wrapped up my brackets.

Following the standard method for how to score "March Madness" brackets, each round has 320 possible points. In round one, assign 10 points for each correctly selected outcome. In round two, assign 20 points for each correct outcome. And so on, doubling the points per game in each round. From that, the math is pretty simple.

  round one:   21 × 10 = 210
  round two:    7 × 20 = 140
  round three:  1 × 40 =  40
  round four:   0 × 80 =   0
  total:                 390
My total score this year is 390 points. As a comparison, last year's script (the one with the bug) scored 530 in one instance, and 490 in another instance. But remember that there were a lot of upsets in this year's games, so everyone's brackets fared poorly this year, anyway.
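The scoring arithmetic is simple enough to script. Here's a minimal Bash sketch using this year's per-round counts, with the point value per correct pick doubling each round:

```shell
#!/bin/bash
# Correct predictions per round: round of 64, round of 32, Sweet Sixteen, Elite Eight
correct=(21 7 1 0)

points=10   # each correct pick in round one is worth 10 points
total=0
for c in "${correct[@]}"; do
  total=$(( total + c * points ))
  points=$(( points * 2 ))   # point value per game doubles each round
done
echo "Total score: $total"   # prints: Total score: 390
```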

Maybe next year will be better.

Did you use the Bash script to help fill out your "March Madness" brackets? How did you do?

Monday, April 3, 2017

How many testers do you need?

When you start a usability test, the first question you may ask is "how many testers do I need?" The standard go-to article on this is Nielsen's "Why You Only Need to Test with 5 Users" which gives the answer right there in the title: you need five testers.

But it's important to understand why Nielsen picks five as the magic number. MeasuringU has a good explanation, but I think I can provide my own.

The core assumption is that each tester will uncover a certain number of issues in a usability test, assuming good test design and well-crafted scenario tasks. The next tester will uncover about the same number of usability issues, but not exactly the same issues. So there's some overlap, and some new issues too.

If you've done usability testing before, you've observed this yourself. Some testers will find certain issues, other testers will find different issues. There's overlap, but each tester is on their own journey of discovery.

Exactly how many issues each tester uncovers is up for some debate. Nielsen uses his own research and asserts that a single tester can uncover about 31% of the usability issues. Again, that assumes good test design and scenario tasks. So one tester finds 31% of the issues, the next tester finds 31% but not the same 31%, and so on. With each tester, there's some overlap, but you discover some new issues too.

In his article, Nielsen describes a function to demonstrate the number of usability issues found vs the number of testers in your test, for a traditional formal usability test:
1 - (1 - L)^n

…where L is the proportion of issues a single tester can uncover (Nielsen assumes L = 31%) and n is the number of testers.

I encourage you to run the numbers here. A simple spreadsheet will help you see how the value changes for increasing numbers of testers. What you'll find is a curve that grows quickly then slowly approaches 100%.
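If you'd rather script it than build a spreadsheet, a quick Bash loop (using awk for the floating-point math) evaluates Nielsen's function for several values of n, assuming L = 31%:

```shell
#!/bin/bash
# coverage = 1 - (1 - L)^n, with L = 0.31 (Nielsen's per-tester estimate)
for n in 1 2 3 4 5 10 15; do
  awk -v n="$n" 'BEGIN { printf "n=%2d  coverage=%5.1f%%\n", n, (1 - (1 - 0.31)^n) * 100 }'
done
```

At n=5 the coverage is already around 84%, and the curve flattens out quickly after that.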


Note that at five testers, you have uncovered about 85% of the issues. Nielsen's curve suggests diminishing returns at higher numbers of testers. As you add testers, you'll certainly discover more usability issues, but the increment gets smaller each time. Hence Nielsen's recommendation for five testers.

Again, the reason that five is a good number is because of overlap of results. Each tester will help you identify a certain number of usability issues, given a good test design and high quality scenario tasks. The next tester will identify some of the same issues, plus a few others. And as you add testers, you'll continue to have some overlap, and continue to expand into new territory.

Let me help you visualize this. We can create a simple program to show this overlap. I wrote a Bash script to generate SVG files with varying numbers of overlapping red squares. Each red square covers about 31% of the gray background.
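My script isn't reproduced here, but a minimal Bash sketch along the same lines might look like this. The 400×400 canvas and 223×223 squares are my own choices for illustration (223² is about 31% of 400²):

```shell
#!/bin/bash
# Emit an SVG with n randomly placed, semi-transparent red squares on a gray field.
# Each 223x223 square covers about 31% of the 400x400 canvas (223^2 / 400^2 ≈ 0.31).
make_overlap_svg() {
  local n=$1 i x y
  echo '<svg xmlns="http://www.w3.org/2000/svg" width="400" height="400">'
  echo '<rect width="400" height="400" fill="#aaaaaa"/>'
  for (( i = 0; i < n; i++ )); do
    x=$(( RANDOM % 178 ))   # keep the square fully on the canvas (400 - 223 = 177)
    y=$(( RANDOM % 178 ))
    echo "<rect x=\"$x\" y=\"$y\" width=\"223\" height=\"223\" fill=\"#aa0000\" fill-opacity=\"0.5\"/>"
  done
  echo '</svg>'
}

make_overlap_svg 5 > overlap-5.svg
```

Run it for n=1, 5, 10 and 15 and compare how much gray remains in each image.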


If you run this script, you should see output that looks something like this, for different values of n. Each image starts over; the iterations are not additive:

n=1

n=2

n=3

n=4

n=5

n=10

n=15

As you increase the number of testers, you cover more of the gray background, and you also get more overlap. The increase in coverage is quite dramatic from one to five testers, but compare five to ten, and ten to fifteen. Certainly there's more coverage (and more overlap) at ten than at five, but not significantly more. And the same going from ten to fifteen.

These visuals aren't meant to be an exact representation of the Nielsen iteration curve, but they do help show how adding more testers gives significant return up to a point, and then adding more testers doesn't really get you much more.

The core takeaway is that it doesn't take many testers to get results that are "good enough" to improve your design. The key idea is that you should do usability testing iteratively with your design process. I think every usability researcher would agree. Ellen Francik, writing for Human Factors, refers to this process as the Rapid Iterative Testing and Evaluation (RITE) method, arguing "small tests are intended to deliver design guidance in a timely way throughout development." (emphasis mine)

Don't wait until the end to do your usability tests. By then, it's probably too late to make substantive changes to your design, anyway. Instead, test your design as you go: create (or update) your design, do a usability test, tweak the design based on the results, test it again, tweak it again, and so on. After a few iterations, you will have a design that works well for most users.

Sunday, April 2, 2017

A throwback theme for gedit

This isn't exactly about usability, but I wanted to share it with you anyway.

I've been involved in a lot of open source software projects, since about 1993. You may know that I'm also the founder and coordinator of the FreeDOS Project. I started that project in 1994, to write a free version of DOS that anyone could use.

DOS is an old operating system. It runs entirely in text mode. So anyone who was a DOS user "back in the day" should remember text mode and the prevalence of white-on-blue text.

For April 1, we used a new "throwback" theme on the FreeDOS website. We rendered the site using old-style DOS colors, with a monospace DOS VGA font.

Even though the redesign was meant only for a day, I sort of loved the new design. This made me nostalgic for using the DOS console: editing text in that white-on-blue, without the "distraction" of other fonts or the glare of modern black-on-white text.

So I decided to create a new theme for gedit, based on the DOS throwback theme. Here's a screenshot of gedit editing a Bash script, and editing the XML theme file itself:



The theme uses the same sixteen-color palette as DOS. You can find the explanation of why DOS has sixteen colors at the FreeDOS blog. I find the white-on-blue text to be calming, and easy on the eyes.

Of course, to make this a true callback to earlier days of computing, I used a custom font. On my computer, I used Mateusz Viste's DOSEGA font. Mateusz created this font by redrawing each glyph in Fontforge, using the original DOS CPI files as a model. I think it's really easy to read. (Download DOSEGA here: dosega.zip)

Want to create this on your own system? Here's the XML source to the theme file. Save this in ~/.local/share/gtksourceview-3.0/styles/dosedit.xml and gedit should find it as a new theme.
<?xml version="1.0" encoding="UTF-8"?>
<!--
  reference: https://developer.gnome.org/gtksourceview/stable/style-reference.html
-->
<style-scheme id="dos-edit" name="DOS Edit" version="1.0">
<author>Jim Hall</author>
<description>Color scheme using DOS Edit color palette</description>
<!--
  Emulate colors used in a DOS Editor. For best results, use a monospaced font
  like DOSEGA.
-->

<!-- Color Palette -->

<color name="black"           value="#000"/>
<color name="blue"            value="#00A"/>
<color name="green"           value="#0A0"/>
<color name="cyan"            value="#0AA"/>
<color name="red"             value="#A00"/>
<color name="magenta"         value="#A0A"/>
<color name="brown"           value="#A50"/>
<color name="white"           value="#AAA"/>
<color name="brightblack"     value="#555"/>
<color name="brightblue"      value="#55F"/>
<color name="brightgreen"     value="#5F5"/>
<color name="brightcyan"      value="#5FF"/>
<color name="brightred"       value="#F55"/>
<color name="brightmagenta"   value="#F5F"/>
<color name="brightyellow"    value="#FF5"/>
<color name="brightwhite"     value="#FFF"/>

<!-- Settings -->

<style name="text"                 foreground="white" background="blue"/>
<style name="selection"            foreground="blue" background="white"/>
<style name="selection-unfocused"  foreground="black" background="white"/>

<style name="cursor"               foreground="brown"/>
<style name="secondary-cursor"     foreground="magenta"/>

<style name="current-line"         background="black"/>
<style name="line-numbers"         foreground="black" background="white"/>
<style name="current-line-number"  background="cyan"/>

<style name="bracket-match"        foreground="brightwhite" background="cyan"/>
<style name="bracket-mismatch"     foreground="brightyellow" background="red"/>

<style name="right-margin"         foreground="white" background="blue"/>
<style name="draw-spaces"          foreground="green"/>
<style name="background-pattern"   background="black"/>

<!-- Extra Settings -->

<style name="def:base-n-integer"   foreground="cyan"/>
<style name="def:boolean"          foreground="cyan"/>
<style name="def:builtin"          foreground="brightwhite"/>
<style name="def:character"        foreground="red"/>
<style name="def:comment"          foreground="green"/>
<style name="def:complex"          foreground="cyan"/>
<style name="def:constant"         foreground="cyan"/>
<style name="def:decimal"          foreground="cyan"/>
<style name="def:doc-comment"      foreground="green"/>
<style name="def:doc-comment-element" foreground="green"/>
<style name="def:error"            foreground="brightwhite" background="red"/>
<style name="def:floating-point"   foreground="cyan"/>
<style name="def:function"         foreground="cyan"/>
<style name="def:heading0"         foreground="brightyellow"/>
<style name="def:heading1"         foreground="brightyellow"/>
<style name="def:heading2"         foreground="brightyellow"/>
<style name="def:heading3"         foreground="brightyellow"/>
<style name="def:heading4"         foreground="brightyellow"/>
<style name="def:heading5"         foreground="brightyellow"/>
<style name="def:heading6"         foreground="brightyellow"/>
<style name="def:identifier"       foreground="brightyellow"/>
<style name="def:keyword"          foreground="brightyellow"/>
<style name="def:net-address-in-comment" foreground="brightgreen"/>
<style name="def:note"             foreground="green"/>
<style name="def:number"           foreground="cyan"/>
<style name="def:operator"         foreground="brightwhite"/>
<style name="def:preprocessor"     foreground="brightcyan"/>
<style name="def:shebang"          foreground="brightgreen"/>
<style name="def:special-char"     foreground="brightred"/>
<style name="def:special-constant" foreground="brightred"/>
<style name="def:specials"         foreground="brightmagenta"/>
<style name="def:statement"        foreground="brightmagenta"/>
<style name="def:string"           foreground="brightred"/>
<style name="def:type"             foreground="cyan"/>
<style name="def:underlined"       foreground="brightgreen"/>
<style name="def:variable"         foreground="cyan"/>
<style name="def:warning"          foreground="brightwhite" background="brown"/>

</style-scheme>

Friday, March 31, 2017

Screencasts for usability testing

There's nothing like watching a real person use your software to finally understand the usability issues your software might have. It's hard to get that kind of feedback through surveys or other indirect methods. I find it's best to moderate a usability test with a few testers who run through a set of scenario tasks. By observing how they attempt to complete the scenario tasks, you can learn a lot about how real people use your software to do real tasks.

Armed with that information, you can tweak the user interface to make it easier to use. Through iteration (design, test, tweak, test, tweak, etc) you can quickly find a design that works well for everyone.

The simple way to moderate a usability test is to watch what the user is doing, and take notes about what they do. I recommend the "think aloud" protocol, where you ask the tester to talk about what they are doing. If you're looking for a Print button, just say "I'm looking for a Print button" so I can make note of that. And move your mouse to where you are looking, so I can see what you are doing and where you are looking. In my experience, testers adapt to this fairly quickly.

In addition to taking your own notes, you might try recording the test session. That allows you to go back to the recording later to see exactly what the tester was doing. And you can share the video with other developers in your project, so they can watch the usability test sessions.

Screencasts are surprisingly easy to do, at least under Linux. The GNOME desktop has a built-in screencast function, to capture a video of the computer's screen.

But if you're like me, you may not have known this feature existed. It's kind of hard to get to. Press Ctrl+Alt+Shift+R to start recording, then press Ctrl+Alt+Shift+R again to stop recording.

If that's hard for you to remember, there's also a GNOME Extension called EasyScreenCast that, as the name implies, makes screencasts really easy. Once you install the extension, you get a little menu that lets you start and stop recording, as well as set options. It's very straightforward. You can select a sound input, to narrate what you are doing. And you can include webcam video, for a picture-in-picture video.

Here's a sample video I recorded as part of the class that I'm teaching. I needed a way to walk students through the steps to activate Notebookbar View in LibreOffice 5.3. I also provided written steps, but there's nothing like showing rather than just explaining.



With screencasts, you can extend your usability testing. At the beginning of your session, before the tester begins the first task, start recording a screencast. Capture the audio from the laptop's microphone, too.

If you ask your tester to follow the "think aloud" protocol, the screencast will show you the mouse cursor, indicating where the tester is looking, and it will capture the audio, allowing you to hear what the tester was thinking. That provides invaluable evidence for your usability test.

I admit I haven't experimented with screencasts for usability testing yet, but I definitely want to do this the next time I mentor usability testing for Outreachy. I find a typical usability test can last upwards of forty-five minutes to an hour, depending on the scenario tasks. But if you have the disk space to hold the recording, I don't see why you couldn't use the screencast to record each tester in your usability test. Give it a try!

Monday, March 27, 2017

Testing LibreOffice 5.3 Notebookbar

I teach an online CSCI class about usability. The course is "The Usability of Open Source Software" and provides a background on free software and open source software, and uses that as a basis to teach usability. The rest of the class is a pretty standard CSCI usability class. We explore a few interesting cases in open source software as part of our discussion. And using open source software makes it really easy for the students to pick a program to study for their usability test final project.

I structured the class so that we learn about usability in the first half of the semester, then we practice usability in the second half. And now we are just past the halfway point.

Last week, my students worked on a usability test "mini-project." This is a usability test with one tester. By itself, that's not very useful. But the intention is for the students to experience what it's like to moderate their own usability test before they work on their usability test final project. In this way, the one-person usability test is intended to be a "dry run."

For the one-person usability test, every student moderates the same usability test on the same program. We are using LibreOffice 5.3 in Notebookbar View in Contextual Groups mode. (And LibreOffice released version 5.3.1 just before we started the usability test, but fortunately the user interface didn't change, at least in Notebookbar-Contextual Groups.) Students worked together to write scenario tasks for the usability test, and I selected eight of those scenario tasks.

By using the same scenario tasks on the same program, with one tester each, we can combine results to build an overall picture of LibreOffice's usability with the new user interface. Because the test was run by different moderators, this isn't statistically useful if you are writing an academic paper, and it's of questionable value as a qualitative measure. But I thought it would be interesting to share the results.

First, let's look at the scenario tasks. We started with one persona: an undergraduate student at a liberal arts university. Each student in my class contributed two use scenarios for LibreOffice 5.3, and three scenario tasks for each scenario. That gave a wide field of scenario tasks. There was quite a bit of overlap. And there was some variation on quality, with some great scenario tasks and some not-so-great scenario tasks.

I grouped the scenario tasks into themes, and selected eight scenario tasks that suited a "story" of a student working on a paper: a simple lab write-up for an Introduction to Physics class. I did minimal editing of the scenario tasks; I tried to leave them as-is. Most of the scenario tasks were of high quality. I included a few not-great scenario tasks so students could see how the quality of the scenario task can impact the quality of your results. So keep that in mind.

These are the scenario tasks we used. In addition to these tasks, students provided a sample lab report (every tester started with the same document) and a sample image. Every test was run in LibreOffice 5.3 or 5.3.1, which was already set to use Notebookbar View in Contextual Groups mode:
1. You’re writing a lab report for your Introduction to Physics class, but you need to change it to meet your professors formatting requirements. Change your text to use Times New Roman 12 pt. and center your title

2. There is a requirement of double spaced lines in MLA. The paper defaults to single spaced and needs to be adjusted. Change paper to double spaced.

3. After going through the paragraphs, you would like to add your drawn image at the top of your paper. Add the image stored at velocitydiagram.jpg to the top of the paper.

4. Proper header in the Document. Name, class, and date are needed to receive a grade for the week.

5. You've just finished a physics lab and have all of your data written out in a table in your notebook. The data measures the final velocity of a car going down a 1 meter ramp at 5, 10, 15, 20, and 25 degrees. Your professor wants your lab report to consist of a table of this data rather than hand-written notes. There’s a note in the document that says where to add the table.

[task also provided a 2×5 table of sample lab data]

6. You are reviewing your paper one last time before turning it into your professor. You notice some spelling errors which should not be in a professional paper. Correct the multiple spelling errors.

7. You want to save your notes so that you can look back on them when studying for the upcoming test. Save the document.

8. The report is all done! It is time to turn it in. However, the professor won’t accept Word documents and requires a PDF. Export the document as a PDF.
If those don't seem very groundbreaking, remember the point of the usability test "mini-project" was for the students to experience moderating their own usability test. I'd rather they make mistakes here, so they can learn from them before their final project.

Since each usability test was run with one tester, and we all used the same scenario tasks on the same version of LibreOffice, we can collate the results. I prefer to use a heat map to display the results of a usability test. The heat map doesn't replace the prose description of the usability test (what worked vs. what were the challenges) but the heat map does provide a quick overview that allows focused discussion of the results.

In a heat map, each scenario task is on a separate row, and each tester is in a separate column. At each cell, if the tester was able to complete the task with little or no difficulty, you add a green block. Use yellow for some difficulty, and orange for greater difficulty. If the tester really struggled to complete the task, use a red block. Use black if the task was so difficult the tester was unable to complete the task.

Here's our heat map, based on fourteen students each moderating a one-person usability test (a "dry run" test) using the same scenario tasks for LibreOffice 5.3 or 5.3.1:


A few things about this heat map:

Hot rows show you where to focus

Since scenario tasks are on rows, and testers are on columns, you read a heat map by looking across each row and looking for lots of "hot" items. Look for lots of black, red, or orange. Those are your "hot" rows. And rows that have a lot of green and maybe a little yellow are "cool" rows.

In this heat map, I'm seeing the most "hot" items in setting double space (#2), adding a table (#5) and checking spelling (#6). Maybe there's something in adding a header (#4) but this scenario task wasn't worded very well, so the problems here might be because of the scenario task.

So if I were a LibreOffice developer, and I did this usability test to examine the usability of MUFFIN, I would probably focus on making it easier to set double spacing, add tables, and check spelling. I wouldn't worry too much about adding an image, since that's mostly green. Same for saving, and saving as PDF.

The heat map doesn't replace prose description of themes

What's behind the "hot" rows? What were the testers trying to do, when they were working on these tasks? The heat map doesn't tell you that. The heat map isn't a replacement for prose text. Most usability results need to include a section about "What worked well" and "What needs improvement." The heat map doesn't replace that prose section. But it does help you to identify the areas that worked well vs the areas that need further refinement.

That discussion of themes is where you would identify that task 4 (Add a header) wasn't really a "hot" row. It looks interesting on the heat map, but this wasn't a problem area for LibreOffice. Instead, testers had problems understanding the scenario task. "Did the task want me to just put the text at the start of the document, or at the top of each page?" So results were inconsistent here. (That was expected, as this "dry run" test was a learning experience for my students. I intentionally included some scenario tasks that weren't great, so they would see for themselves how the quality of their scenario tasks can influence their test.)

Different versions are grouped together

LibreOffice released version 5.3.1 right before we started our usability test. Some students had already downloaded 5.3, and some ended up with 5.3.1. I didn't notice any user interface changes for the UI paths exercised by our scenario tasks, but did the new version have an impact?

I've sorted the results so the 5.3.1 columns sit off to the right; the column headers show which testers used LibreOffice 5.3 and which used 5.3.1. I don't see any substantial difference between them. The "hot" rows from 5.3 are still "hot" in 5.3.1, and the "cool" rows are still "cool."

You might use a similar method to compare different iterations of a user interface. As your program progresses from 1.0 to 1.1 to 1.2, etc, you can compare the same scenario tasks by organizing your data in this way.

You could also group different testers together

The heat map also lets you discuss testers. What happened with tester #7? There's a lot of orange and yellow in that column, even for tasks (rows) that fared well with other testers. In this case, the interview revealed that the tester was having a bad day, and came into the test feeling "grumpy," which likely made them impatient with any problems they encountered during the test.

You can use these columns to your advantage. In this test, all testers were drawn from the same demographic: a university student around 18-22 years old, with some to "moderate" experience with Word or Google Docs, but none with LibreOffice.

But if your usability test intentionally included a variety of experience levels (a group of "beginner" users, "moderate" users, and "experienced" users) you might group these columns appropriately in the heat map. So rather than grouping by version (as above) you could have one set of columns for "beginner" testers, another set of columns for "moderate" testers and a third group for "experienced" testers.
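As a minimal sketch of that column grouping (tester names and experience levels invented for illustration), reordering the columns is just a stable sort by group:

```python
# Hypothetical sketch: group tester columns by experience level before
# laying out the heat map. Tester names and levels are invented.

testers = [
    ("Tester 1", "beginner"), ("Tester 2", "experienced"),
    ("Tester 3", "moderate"), ("Tester 4", "beginner"),
    ("Tester 5", "experienced"), ("Tester 6", "moderate"),
]

level_order = {"beginner": 0, "moderate": 1, "experienced": 2}

# Python's sort is stable, so testers keep their original order
# within each experience group.
grouped = sorted(testers, key=lambda t: level_order[t[1]])
column_order = [name for name, level in grouped]
print(column_order)
```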

Tuesday, March 21, 2017

LibreOffice 5.3.1 is out

Last week, LibreOffice released version 5.3.1. This seems to be an incremental release over 5.3 and doesn't seem to change the new user interface in any noticeable way.

This is both good and bad news for me. As you know, I have been experimenting with LibreOffice 5.3 since LibreOffice updated the user interface. Version 5.3 introduced the "MUFFIN" interface. MUFFIN stands for My User Friendly Flexible INterface. Because someone clearly wanted that acronym to spell "MUFFIN." The new interface is still experimental, so you'll need to activate it through Settings→Advanced. When you restart LibreOffice, you can use the View menu to change modes.

So on the one hand, I'm very excited for the new release!

But on the other hand, the timing is not great. Next week would have been better. Clearly, LibreOffice did not have my interests in mind when they made this release.

You see, I teach an online CSCI class about the Usability of Open Source Software. Really, it's just a standard CSCI usability class. The topic is open source software because there are some interesting usability cases there that bear discussion. And it allows students to pick their own favorite open source software project that they use in a real usability test for their final project.

This week, we are doing a usability test "mini-project." This is a "dry run" for the students, who are doing their own usability test for the first time. Each student is running the test with one participant, but all are using the same program. We're testing the new user interface in LibreOffice 5.3, using the Notebookbar in Contextual Groups mode.

So we did all this work to prep for the usability test "mini-project" using LibreOffice 5.3, only for the project to release version 5.3.1 right before we do the test. So that's great timing, there.

But I kid. And the new version 5.3.1 seems to have the same user interface path in Notebookbar-Contextual Groups mode, so our test should yield the same results in 5.3 or 5.3.1.

This is an undergraduate class project, and will not generate statistically significant results like a formal usability test in academic research. But the results of our test may be useful, nonetheless. I'll share an overview of our results next week.

Saturday, March 18, 2017

Will miss GUADEC 2017

Registration is now open for GUADEC 2017! This year, the GNOME Users And Developers European Conference (GUADEC) will be hosted in beautiful Manchester, UK between 28th July and 2nd August.

Unfortunately, I can't make it.

I work in local government, and just like last year, GUADEC falls during our budget time at the county. Our county budget is on a biennium. That means during an "on" year, we make our budget proposals for the next two years. In the "off" year, we share a budget status.

I missed GUADEC last year because I was giving a budget status in our "off" year. And guess what? This year, department budget presentations again happen during GUADEC.

During GUADEC, I'll be making our budget proposal for IT. This is our one opportunity to share with the Board our budget priorities for the next two years, and to defend any budget adjustment. I can't miss this meeting.

Friday, March 17, 2017

Learn Linux

Recently, a student asked me about career options after graduation. This person was interested in options that involved open source software, other than a "developer" position. Because this blog is about open source software, I thought you might be interested in an excerpt of my recommendation:
These days, if you want to get a job as a systems administrator, I recommend learning Linux. Linux administrators are in high demand in pretty much any metro area. In the Twin Cities metro area, it's hard not to have a job if you know Linux.

Red Hat is the most popular Linux distribution for the enterprise. So if you learn one Linux system, learn Red Hat Linux. While it's okay to use GUI tools to manage Linux, you should know how to maintain Red Hat Linux without a GUI. Because when you're running a server, you won't have a GUI.

A good way to learn this is to install Linux (Red Hat Linux, but Fedora can be "close enough") and run it in runlevel 3. Set up a cheap PC in your house and use it as an NFS and CIFS file server. Install a web server on it. Set up DNS on it. Learn how to edit config files at the command line. How to partition, format, and mount a disk (even a USB flash drive) from the command line. How to install packages from the command line. Learn how to write Bash scripts to automate things.

If you want a book, try Linux Systems Administration by O'Reilly Press. For formal training, I recommend Red Hat Sys Admin I and Red Hat Sys Admin II.
My advice mirrors my own background. My undergraduate degree was in physics, with a major in mathematics, yet I managed a successful career in IT. For me, it was about following my interests, and doing what I enjoyed doing. There's nothing like getting paid to do something you wanted to do anyway.

When I got my first job, I was a Unix systems administrator for a small geographics company. We used a very old Unix system called Apollo AEGIS/DomainOS, but had a few HP-UX and SunOS systems. I introduced a few Linux systems to do back-office work like DNS, YP, LPD, etc. My second job was a working manager for a document management company, and while we were mostly AIX, HP-UX and Novell, I installed Linux to run our core "backoffice" services (DNS, file, web, etc). At my third job (working manager at a Big-Ten University) we ran a mix of systems, including AIX and Solaris, but I replaced some of our "Big Iron" AIX systems with Linux to save our web registration system.

From there, I was promoted into larger roles with greater responsibility, so I left my systems administration roots behind. At least, professionally. I still run Linux at home, and I maintain my own Linux server to run a few personal websites.

But I still remember my start as a Unix and Linux systems administrator. I was fortunate that my first boss took a chance on me, and let me learn on the job. That first boss was a supportive mentor, and helped me understand the importance of learning the ropes, of understanding how to do something at the command line before you can use the "shortcut" of a GUI tool. So I encourage others to do the same. Yes, modern Linux has lots of GUI tools to make things easy. But it's better to know how to do it "the old fashioned way" as well.

Saturday, March 4, 2017

Open source branding

I recently discovered this 2016 article from Opensource.com about branding in open source software. The article encourages projects to kill off extra brand names to help the main project get recognized.

The article describes the issue in more detail, but here's a summary:
Let's say you are the maintainer for an open source software project, which I'll make up. Let's call it the Wibbler Framework.

Maybe this is a website builder. Developers can use Wibbler to create awesome, dynamic websites really easily. Wibbler is based around different modules that you can load on your website to do different things.

One day, you get a great idea for a new chart display component. So you spend the weekend writing a module that provides a super simple way for websites to display data in different formats.

You think the new module is pretty cool, so you give it a name: ChartZen.

Pretty realistic scenario, right?

But the problem is this: you've added an extra "brand" to your project. When new users find your project, they are confused. Is it Wibbler, or is it ChartZen? How does ChartZen connect to Wibbler? Do I need to get Wibbler if I just want to use ChartZen? Do I need to run two different servers, one to run Wibbler and another for ChartZen?

By adding the second name, you've confused the original project.

It gets worse if you continue to add new "branding" to every new module. You end up with a confusing forest of competing "brands": Wibbler, ChartZen, AwesomeEdit, FontForce, FormsNirvana, DBConnSupreme, and Pagepaint.

It's better to just name each component after the main Wibbler project. Keep the Wibbler brand intact. ChartZen becomes "Wibbler Charts," which is easier to remember anyway. And it's immediately clear what "Wibbler Charts" does: it's a component for Wibbler that makes charts. Similarly, you also have "Wibbler Editor," "Wibbler Font Picker," "Wibbler Forms," "Wibbler Database Connector," and "Wibbler Page Designer."

How do you manage your open source software project's "brand"? Do you have different names for different components? Or do you maintain one core "brand" that everything connects to?

Sunday, February 26, 2017

How to get started in open source software

A friend pointed me to the Open Source Guides website, a collection of resources for individuals, communities, and companies who want to learn how to run and contribute to an open source project. I thought it was very interesting for new contributors, so I thought I'd share it here.

The website provides lots of information about starting or joining an open source project. There's lots to read, but I hope that doesn't seem like too much for people interested in getting started in open source software. Open Source Guides has several sections for new developers:
  1. How to Contribute to Open Source
  2. Starting an Open Source Project
  3. Finding Users For Your Project
  4. Building Welcoming Communities
  5. Best Practices for Maintainers
  6. Leadership and Governance
  7. Getting Paid for Open Source Work
  8. Your Code of Conduct
  9. Open Source Metrics
  10. The Legal Side of Open Source
I'm not connected with Open Source Guides, but I think it's a great idea!

Of course, there are other ways to learn about open source software, how to get involved, and how to start your own project. Eric Raymond's essay series The Cathedral and the Bazaar is the typical go-to example about open source software. Along similar lines, Opensource.com has a very brief primer on open source contributor guidelines. And there are countless other articles and books I could mention here, including a few articles written by me.

But I'm interested in the open source software community, and anything that helps new folks get started in open source software will help the community grow. So if you're interested in getting involved in open source software, I encourage you to read the Open Source Guides.

Friday, February 24, 2017

Top open source projects

TechRadar recently posted an article about "The best open source software 2017" where they list a few of their favorite open source software projects. It's really hard for an open source software project to become popular if it has poor usability—so I thought I'd add a few quick comments of my own about each.

Here you go:

The best open source office software: LibreOffice


LibreOffice hasn't changed its user interface substantially in a very long time. In the recent LibreOffice 5.3 release, they introduced a new interface option, which they call MUFFIN (My User Friendly Flexible INterface).

The new interface has several modes, including Single Toolbar, Sidebar, and Notebookbar. The last mode, Notebookbar, is interesting. This is very similar in concept to the Microsoft Office Ribbon. People who come from an Office background and are used to how Ribbon behaves, and how it changes based on what you are working on, should like the Notebookbar setting.

To comment on the new interface: I think this is an interesting and welcome direction for LibreOffice. I don't think the current user interface is bad, but I think the proposed changes are a positive step forward. The new MUFFIN interface is flexible and supports users the way they want to use LibreOffice. I think it will appeal to current and new users, and "lower the bar" for users coming to LibreOffice from Microsoft Office.

The best open source photo editor: GIMP


I use GIMP at home for a lot of projects. Most often, I use GIMP to create and edit images for my websites, including the FreeDOS Project website. Although we've recently turned to SVG where possible on the FreeDOS website, for years all our graphics were made in the GIMP.

A few years ago, I asked readers to suggest programs that have good usability (I also solicited feedback through colleagues via their blogs and their readers). Many people talked about GIMP, the open source graphics program (very similar to Photoshop). There were some strong statements on either side: About half said it had good usability, and about half said it had bad usability.

In following up, it seemed that two types of users thought GIMP had poor usability: those who used Photoshop a lot (such as professional graphics editors or photographers) and those who had never used Photoshop, and only tried GIMP because they needed a graphics program.

So GIMP is an interesting case. It's an example of mimicking another program perhaps too well, but (necessarily) not perfectly. GIMP has good usability if you have used Photoshop occasionally, but not if you are an expert in Photoshop, and not if you are a complete Photoshop novice.

The best open source media player: VLC


I haven't used VLC, part of the VideoLAN project, in a few years. I just don't watch movies on my computer. But looking at the screenshots I see today, I can see VLC has made major strides in ease of use.

The menus seem obvious, and the buttons are plain and simple. There isn't much decoration to the application (it doesn't need it) yet it seems polished. Good job!

The best open source video editor: Shotcut


This is a new project for me. I have recorded a few YouTube videos for my private channel, but they're all very simple: just me doing a demo of something (usually related to FreeDOS, such as how to install FreeDOS.) Because my videos aren't very complicated, I just use the YouTube editor to "trim" the start and end of my videos.

Shotcut seems quite complicated to me, at first glance. Even TechRadar seems to agree, commenting "It might look a little stark at first, but add some of the optional toolbars and you'll soon have its most powerful and useful features at your fingertips."

I'm probably not the right audience for Shotcut. Video is just not my interest area. And it's okay for a project to target a particular audience, if they are well suited to that audience.

The best open source audio editor: Audacity


I used Audacity many years ago, probably when it was still a young project. But even then, I remember Audacity as being fairly straightforward to use. For someone (like me) who occasionally wanted to edit a sound file, Audacity was easy to learn on my own. And the next time I used Audacity, perhaps weeks later, I quickly remembered the path to the features I needed.

Those two qualities (learnability and memorability) are important characteristics of good usability. We cover this topic in my online class about usability. The five key characteristics of usability are: Learnability, Efficiency, Memorability, Error Rates, and Satisfaction. Although that last one is getting close to "User eXperience" ("UX"), which is not the same as usability.

The best open source web browser: Firefox


Firefox is an old web browser, but still feels fresh. I use Firefox on an almost daily basis (when I don't use Firefox, I'm usually in Google Chrome.)

I did usability testing on Firefox a few years ago, and found it does well in several areas:

Familiarity: Firefox tries to blend well with other applications on the same operating system. If you're using Linux, Firefox looks like other Linux applications. When you're on a Mac, Firefox looks like a Mac application. This is important, because UI lessons that you learn in one application will carry over to Firefox on the same platform.

Consistency: Features within Firefox are accessed in a similar way and perform in a similar way, so you aren't left feeling like the program is a mash of different coders.

Obviousness: When an action produces an obvious result, or clearly indicated success, users feel comfortable because they understand what the program is doing. They can see the result of their actions.

The best open source email client: Thunderbird


Maybe I shouldn't say this, but I haven't used a desktop email client in several years. I now use Gmail exclusively.

However, the last desktop email client I used was definitely Thunderbird. And I remember it being very nice. Sometimes I explored other desktop email programs like GNOME Evolution or Balsa, but I always came back to Thunderbird.

Like Firefox, Thunderbird integrated well into whatever system you used. Its features are self-consistent, and actions produce obvious results. This shouldn't be surprising, though: Thunderbird was originally a Mozilla project.

The best open source password manager: KeePass


Passwords are the keys to our digital lives. So much of what we do today is done online, via a multitude of accounts. With all those accounts, it can be tempting to re-use passwords across websites, but that's really bad for security; if a hacker gets your password on one website, they can use it to get your private information from other websites. To practice good security, you should use a different password on every website you use. And for that, you need a way to store and manage those passwords.

KeePass is an outstanding password manager. There are several password managers to choose from, but KeePass has been around a long time and is really solid. With KeePass, it's really easy to create new entries in the database, group similar entries together (email, shopping, social, etc.) and assign icons to them. And a key feature is generating random passwords. KeePass lets you create passwords of different lengths and complexity, and provides a helpful visual color guide (red, yellow, green) to suggest how "secure" your password is likely to be.
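The core idea behind a random password generator can be sketched in a few lines of Python. This is a minimal illustration of the concept; the character sets, symbol list, and default length here are my own choices, not KeePass's actual defaults:

```python
# Sketch of random password generation, similar in spirit to what KeePass
# offers. The alphabet and default length are illustrative choices only.
import secrets
import string

def generate_password(length=16, symbols=True):
    """Build a password from a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits
    if symbols:
        alphabet += "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password(20))
```

Note the use of the `secrets` module rather than `random`: for passwords you want a cryptographically secure source of randomness, not a predictable pseudo-random generator.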