RSS Feed

Teaching Open Source Planet

Teaching Open Source Planet is a Planet, a collection of personal blogs by Teaching Open Source community members working to bring the open source way into academia. We write about our inspirations and experiences in learning, teaching, and collaborating within free and open communities. In the spirit of freedom, we share and criticize in order to collectively improve. We hope you enjoy reading our thoughts; if you’re fascinated by what you see, consider adding your voice to the conversation.

Open Learning Analytics

Here is something about Open Learning Analytics:

Using Heutagogy to Address the Needs of Online Learners (Distance Learning)

We know about pedagogy.  We have also come across the term "andragogy" as propounded by Malcolm Knowles.  What about Heutagogy?

Heutagogy is all about self-determined learning.  It is highly applicable to online learners.

Here is an interesting article on how heutagogy can be used to address the needs of online learners:

Instead of a MOOC, How About a SOOC?

MOOC stands for Massive Open and Online Course.
What about SOOC?
Well, it looks like someone has coined the term "Small Open and Online Course", or SOOC.
You can read all about this at this website:

General Update

So, it turns out that whole compilation/posting of our completed ticket wasn’t as ‘complete’ as we’d thought. That is to say, rather than compile everything and post it to GitHub, we ended up trying to compile all our individual code first….repeatedly. It took a bit of playing around, but we figured out that it could all be compiled and pushed up through the GitHub GUI. For whatever reason, whenever we tried it from the Bash command line, it just wouldn’t work properly.

After finally getting it sent up, we still hadn’t heard back from the admins about whether we’d properly fulfilled the requirements, so we’re not sure where to go with all of this. As a group, we pretty much decided to focus on the book reports that would be due soon, and just wait for confirmation. As I’ve still not heard from Dhimitris (who originally claimed the ticket), all I’ve been doing is reading more from my book.

Frankly, I was right. Reading glorified textbooks isn’t that fun. I prefer sci-fi/fantasy to non-fiction :P

If all bugs are shallow, why was Heartbleed only just fixed?

This week the Internet’s been ablaze with news of another security flaw in a widely used open source project, this time a bug in OpenSSL dubbed “Heartbleed”.

This is the third high-profile security issue in as many months. In each case the code was not only open but being used by thousands of people including some of the world’s largest technology companies, and had been in place for a significant length of time.

In his 1997 essay The Cathedral and The Bazaar, Eric Raymond stated that “Given enough eyeballs, all bugs are shallow.”  If this rule (dubbed Linus’s Law by Raymond, for Linus Torvalds) still holds true, then how can these flaws exist for such a long time before being fixed?

Let’s start by looking at what we mean by a “bug”.  Generally speaking, the term “bug” refers to any defect in the code of a program whereby it doesn’t function as required.  That definition certainly applies in all these cases, but for a bug to be reported, it has to affect people in a noticeable way.  The particular variety of bugs we’re talking about here, security flaws in encryption libraries, don’t affect the general population of users, even at the scale of use we see here.  It’s only when people try to use the software in a way other than intended, or specifically audit code looking for such issues, that these bugs become apparent.
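Heartbleed itself is a good illustration of a bug that only bites when the software is used other than as intended. The sketch below is not OpenSSL’s actual code (which is C); it is just the shape of the mistake: trusting the length the client claims rather than the length of what it actually sent.

```python
# A sketch (not OpenSSL's actual code) of the class of bug behind
# Heartbleed: the server echoes back as many bytes as the client
# *claims* to have sent, without checking the claim.

def heartbeat_response(memory: bytes, payload: bytes, claimed_len: int) -> bytes:
    # Buggy: trusts claimed_len instead of len(payload).
    # If claimed_len > len(payload), the slice reads past the payload
    # into the adjacent "memory", leaking data the client never sent.
    buffer = payload + memory
    return buffer[:claimed_len]

secret = b"server-private-key"
leak = heartbeat_response(secret, b"hi", claimed_len=10)
print(leak)  # b"hi" followed by 8 bytes of the adjacent secret
```

An ordinary client always sends a truthful length, so the bug is invisible in normal use; only a deliberately malformed request (or a code audit) reveals it.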

When Raymond talks about bugs being shallow, he’s really talking about how many people looking at a given problem will find the cause and solution more quickly than one person looking at that problem.  In the essay, Raymond quotes Torvalds saying “I’ll go on record as saying that finding [the bug] is the bigger challenge.”

So the problem we’ve been seeing here isn’t that the bugs took a long time to diagnose and fix; instead, it’s that their lack of impact on the intended use of the software means they’ve taken a long time to be noticed.  Linus’s Law still holds true, but it’s not a panacea for security. The recent events affirm that neither open nor closed code is inherently more secure.

For more about security in open source software, check out our briefing on the subject.

Software Freedom and Frontline Services

There are many reasons why organisations choose open source software. Sometimes Total Cost of Ownership (TCO) is a key factor. Sometimes the FOSS offering is the market leader in its niche (such as Hadoop for Big Data, or WordPress for blogs). Sometimes it’s the particular features and qualities of the software.

However, one of the other key characteristics of FOSS is freedom, and this is emerging as an important consideration when using software to deliver key user-facing services.

"If you want to achieve greatness, stop asking for permission"

At the Open Source Open Standards conference in London on April 3rd, James Stewart from the UK Government Digital Service (GDS) stressed the importance of the permissionless character of FOSS. This enables GDS to modify government frontline services whenever there is need, or whenever an improvement can be identified. This means that services can innovate and improve continually without requiring cycles of negotiation with suppliers.

Businesses are also identifying the importance of “delivering constant value and incremental improvement” that can be delivered through participation in FOSS projects, and through permissionless modification.

While it may seem at odds with typical procurement and deployment practices, where software is used for delivering key services to customers, organisations can choose to devote resources to continual innovation and improvement (using agile processes and continuous testing) rather than more traditional models of sparse, planned service upgrades. This can make the difference in crowded markets or, for the public sector, in responding to public demand. With FOSS, continual service improvement processes can be implemented in an agile manner.

Free and Open Source Software is an enabler of permissionless innovation, so when evaluating software for frontline services, bear this in mind.

Image by Thomas Hawk used under CC-BY-NC. Please note that the NC clause may be incompatible with reuse of the text of this blog post, which is CC-BY-SA. For avoidance of doubt, syndicate this article using another image.

North America Mozilla Reps Meetup


Our group photo

This weekend, North America Mozilla Reps gathered in not-so-sunny Portland, Oregon. We worked from the Portland office during the weekend, where we collaborated on plans for North America for the next six-month period. We also tackled a number of topics, from websites to refining our priority cities, which will help us be more successful in moving forward in our mission to grow contributors in North America.

We were very fortunate to have some new people participate this time round including Lukas Blakk, Janet Swisher, Larissa Shapiro, Joanna Mazgaj, Robby Sayles, Prashish Rajbhandari, Tanner Filip, Dan Gherman and Christie Koehler. It was excellent to have a larger group because this brought ideas from people who see things through different lenses.


Voodoo Donuts delivered Firefox Donuts 2.0

All in all, I feel we tackled a lot more work this time than our previous meetup last year in San Francisco and we decided to have our next meetup in Portland again. One of my favorite activities during the meetup was a diversity activity that Lukas led us in that many of us hope to do with our own communities.

We closed off the meetup with a trip to the Ground Kontrol Arcade and Bar where there were many games of Pac Man and Dance Dance Revolution.


Taking Care of Backlog…

So, apparently, I’m really bad at keeping these sorts of things regular. So, let’s start off with two weeks ago, shall we?

Two weeks ago, we did a sort-of lab in class, one that involved downloading various applications and such before it could really get going; well, it involved all that for me, seeing as I was using vastly outdated software/programs. In order to get things working, I had to download the latest version of Eclipse available (Helios, I believe?), as well as a newer version of Java, in order to properly run everything. I tried to do this all in class, but for whatever reason my internet connection in the classroom seems especially bad. It got to the point that the estimated completion time was several hours, at the lowest. I think the highest I saw was about 10 hours.

Either way, after I got home, everything seemed to work out well enough. The downloads finished rather quickly, and I soon was running the proper software. Unfortunately, everything wasn’t perfect at this point, and thus I had to play around with the Eclipse settings for a bit. Now, while I really should have been typing this up as I did it, so I could remember everything I did, I was an idiot, and failed to do so. So, let’s see how much I can remember, and how much I can re-work through…

It still wasn’t letting me run files as JUnit tests, so I had to go into the settings and preferences and such to find out why it wasn’t working. Come to find out, in spite of downloading the latest Java and Eclipse, it was still trying to use an old system library, 1.6 I believe. I had to go in and forcibly redirect which Build Path it was using, then force it to run as a JUnit test (unfortunately, I can’t quite remember what I ended up having to do to get that to work properly, since it still didn’t want to run at first).

After that fiasco was over and done with, our group finished up our ticket and got ready to compile it all together and send it. I’m not sure if we decided on a new ticket or anything yet, but I suppose I’ll find out later today in class. In the meantime, I’ve been taking a look at that book I’m apparently supposed to give a report on. Not gonna lie, I’m not looking forward to reading what looks to amount to a specialized textbook. I generally shunt those off to be used as references, at best. And even then, the Internet’s generally faster for me. Well, here’s hoping this doesn’t turn out as painful as it’s starting to sound.


Samuel and I left Mowe for Ikire at the end of the RUN (Redeemer’s University, Nigeria) POSSE. While we enjoyed viewing the countryside, it was a depressing journey. The Lagos-Ibadan expressway is always undergoing repair, and so it was during our trip. We were stuck in terrible traffic for hours, and at some point I decided that we would continue with the last leg of the trip (Ibadan-Ikire) the following morning.

POSSENG@UniOsun was also a huge success, though we had only a day to interact with the participants. We succeeded in summarizing all that was taught at RUN over the previous two days. There was sufficient time for coding, demonstrations and Q&A.

In all honesty, we met another set of engaging students. I remember a student asking me a thoughtful question: most programming languages are written in English, and so are the applications built using them. As a linguistics student, how can one change the language used in a program/application?

I understood the question; he was not only asking about localizing the programs but also internationalizing them (most notably their encoding). He had asked me whether the statements, instructions, etc. in a piece of code could be changed from English to a different language (with its diacritical marks) and the application still run. I answered the question and gave some pointers on how he could take it up as research work.
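The principle behind the answer can be sketched in a few lines: keep the program’s logic in one place and look message strings up in a per-language catalogue, so the same code can run in English or in Yoruba, diacritics included. (This is an illustrative sketch; the catalogue contents and the approximate Yoruba strings are mine, and real projects would use a toolchain such as gettext or Pootle.)

```python
# A minimal message-catalogue sketch: program logic stays the same,
# only the looked-up strings change per language. Unicode text (and
# UTF-8 files on disk) preserves diacritical marks.

CATALOGUES = {
    "en": {"greeting": "Welcome", "farewell": "Goodbye"},
    # Yoruba strings here are illustrative approximations.
    "yo": {"greeting": "Ẹ ku abọ", "farewell": "O dabọ"},
}

def translate(message_id: str, lang: str) -> str:
    # Fall back to English if the language or message is missing.
    return CATALOGUES.get(lang, {}).get(message_id) or CATALOGUES["en"][message_id]

print(translate("greeting", "yo"))  # Ẹ ku abọ
print(translate("greeting", "fr"))  # Welcome (falls back to English)
```

The key design point is that nothing in `translate` knows anything about any particular language: adding a new locale means adding a catalogue, not changing code.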

One of the reasons for running POSSENG in two venues was that we wanted to ease the transportation and accommodation problems/costs for the participants. And it worked. At Ikire, we had academics and students who had come from Obafemi Awolowo University, Ile-Ife, as well as some Mozilla Reps.





This event marks the end of POSSE Nigeria. Many thanks to the sponsor, the Mozilla Foundation, and the hosts, RUN and UniOsun. Pictures from the Mozilla Club, UniOsun:

POSSE Lives!!!

I’ve been struck by how much I’ve been participating in open source communities in the past few months.  My research in the area of student involvement in HFOSS draws my focus to open source, but this year has brought new and fun activities.

I’m super excited to be starting another POSSE!!   We are hoping to expand the community of instructors involving students in open source software via the TeachingOpenSource community. The online portion starts this week, and the face-to-face meeting is May 28-30. We’re looking forward to learning how student involvement in HFOSS can work in some new academic institutions.

I’ve also been exploring new FOSS venues in the past few weeks. On March 9th I went to an “Open Source Comes To Campus” event hosted by OpenHatch’s Shauna Gordon-McKeon.  Karl Wurst and I worked with students from UMass Amherst and Mt. Holyoke to learn about FOSS and to hack on the MouseTrap project. Karl and I are hoping to bring OSCTC to either Worcester State or WNE at some point.

And two weeks ago I went to LibrePlanet 2014. Friday evening included a visit to the Free Software Foundation offices where I got a peek at the 3D printer that was being raffled off and met some of the folks from the FSF. On Saturday I met Karen Sandler and got to thank her in person for providing an impromptu tutorial on licensing to my students at the Montreal Gnome Summit. I also found it very interesting to hear from people from a wide variety of disciplines (e.g., art, theater) and how they’re using, developing, and promoting free software.

And now I’m looking forward to the Hackfest at CCSCNE 2014.  Karl, Stoney Jackson, and I are hosting a hack on the GNOME MouseTrap project on Friday April 25th. The CCSCNE conference hosts a programming contest that draws around 75-100 students. Many of these students stay for some or all of the conference, and we are hoping that they’ll be interested in hacking when the programming contest is over. Pizza will be had by all!


The assignment that was given to the students was evaluated and used to select who would join us for the second day of POSSENG. It was a simple assignment, meant to determine how committed the attendees were to the workshop. They had been asked to create a blog and write up their first-day experience as their (first) blog post. Here are some of the blog posts from the attendees:

After deciding who would be joining us for Day 2, we began the day by exploring some community projects that could be of interest to the team. One such project is software localization. I spent the next three hours showing them how to do online and offline translations using Pootle in Mozilla, Transifex in Fedora, and Virtaal. In addition, they signed up on the Mozilla localization website, and for some of them, their accounts were upgraded so that they could review translations, download compressed locale files and view other projects. A language pack was later built for the translation into the Yoruba language, and it was tested.

At some point, the students wanted to see how remote debugging works, and I got some folks (Yoric and gerard-majax) on the Mozilla #b2g channel to demonstrate it. I had earlier shown the participants some of the testing and debugging tools used for the B2G/FirefoxOS project.



One of the common challenges for the academics there is that they do not know how to realize their various projects/research works. Hence, we changed focus from debugging to applied research. After carefully listening to the academics, I realized I needed to show them how to build a project from source and hack on it in order to achieve their goals. Some of the projects that were suggested are Weka, WinBUGS (also known as OpenBUGS) and NS3. They wanted to enhance the application(s) in order to take measurements/readings in a new scale and/or units. They would like to implement such work, which is often the core of their postgraduate research, and release it to the public domain. In order to increase their level of confidence, I demonstrated how to build Weka and WinBUGS from source. We pulled their source code and built them. I later showed them which files make up the applications and how to hack on a typical application.

The closing of POSSE NG at RUN was done by the HoDs of the Computer Science and Statistical Mathematics Departments. Like the previous day, it lasted for eight hours. Many thanks to Samuel for his support; although he could not assume the role of a POSSE instructor, his technical support was highly appreciated by all the participants. Samuel and I later headed for the University of Osun for the third day of POSSE NG.


The management of RUN (Redeemer’s University, Nigeria) attended the opening event. The Vice Chancellor (VC) was going to open the event; unfortunately, he could not make it (owing to other engagements). There were, however, many other executive staff members (e.g. the Director of IT, the Head of the Library, the HoDs of Computer Science, Mathematics and Statistics, etc.), and one of them stood in for the VC.





There were over 30 participants; they were a mix of academics from within and outside RUN (e.g. there were academics from Ajayi Crowther University, the University of Lagos, etc.), students, librarians and some technical support staff. We got a cool venue for the event: one of the software laboratories in the Department of Computer Science. The opening event was brief and concise. During my opening note, I told everyone why all the Mozilla contributors I had contacted could not come. It was a pity, but I promised that I would be wearing three hats: instructor/co-ordinator, evangelist and developer. And finally, we took some photos before the executive members of the school departed.



Not long after, the POSSE NG technical session got started. It didn’t take long before they got productively lost. They were shown the TOS IRC channel, planet and website. The #TeachingOpenSource IRC channel and the Twitter hashtag #POSSNIG were our communication tools of choice; you can join or follow us using either of these for updates. After talking about TOS, I explained the terms Open Source, Open Content and Open Standard to them. I also showed them how to get concise information about any OS project (using Ohloh and OpenHatch); these websites show the number of contributors, commits and programming languages used in a project. They were also exposed to some of the OSS licenses.

Given that a week-long bootcamp was condensed into two days, there was little or no break, and the session lasted for eight hours. Not all the participants could stay up that long; I saw quite a number of them dozing off. There was just so much to show them; I was right when I introduced myself by saying “…. I am interested in OS because it shows us how things should be done.” We got our hands dirty by building boot2gecko (the FirefoxOS phone operating system), using Bugzilla to file a bug and diving into the core of the B2G codebase (in order to see the various languages used in it).

It would be good to give some feedback on the participants. I was highly impressed by the enthusiasm shown by all of them, not least the three noticeable geeks who seemed to know everything I was talking about. The students and their instructors were very engaging, which showed we had got the right crop of attendees into POSSE NG.

Finally, they were given an assignment, which was due the following morning. And we ended the day’s session by sharing gifts.



Stephen’s question: how do you tutor fellow researchers in programming?

Stephen recently asked for advice on how I’d teach programming using Python to fellow academics. Specifically, an English major and a Materials Science (Matsci) researcher — smart people deeply into their own disciplines… who happen to not have had programming experience. Here’s what I said; if you have more comments/advice for Stephen, leave them in the comments.

Showing them the fundamentals in a common way, then diverging into their disciplinary interests, sounds about right. If they’re both completely new to programming, you’ll have a little while before they really branch off.

If you want a common starter text, Think Python may be good for this specific case. Just skip the sections related to graphics (Turtle at the start, Tkinter at the end). I also enjoy (and under different circumstances, would recommend) Learn Python The Hard Way and Dive Into Python, but the former is more geared towards web development and the latter is for people who have programmed in another language before.

For your English major friend, one of the exercises gets you into Markov analysis/generation of texts. This should be a fun place to play with poetry. The Markov Bible is a hilarious example of the sorts of things that can be done, and people have written entire books on text processing with Python.
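For a flavour of that exercise, here is a toy Markov generator (my own illustrative version, not the book’s code): learn which word tends to follow each pair of words in a text, then walk that table to produce new text in the same style.

```python
import random

def build_chain(words, order=2):
    # Map each `order`-word prefix to the list of words seen to follow it.
    chain = {}
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        chain.setdefault(prefix, []).append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    rng = random.Random(seed)        # seed makes the output repeatable
    prefix = rng.choice(list(chain))
    out = list(prefix)
    for _ in range(length):
        successors = chain.get(tuple(out[-len(prefix):]))
        if not successors:
            break  # dead end: this prefix never recurred in the source text
        out.append(rng.choice(successors))
    return " ".join(out)

text = "the quick brown fox jumps over the lazy dog the quick brown cat".split()
print(generate(build_chain(text), length=8, seed=42))
```

Swap in a few thousand lines of poetry instead of the toy sentence and the output starts to sound eerily like the original author, which is what makes this such a fun first text-processing project.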

For your Matsci friend, who you said was interested in data analysis, Allen’s next book starts playing with things like census data — that exercise should start being doable around the same time as the Markov analysis, because they’re both fundamentally about “read from a file, do math, spit out to a file.” They’ll probably want to jump off into SciPy at some point to make plots and crunch more complex data.
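That "read from a file, do math, spit out to a file" pattern is short enough to sketch whole (the file layout and column names below are invented for illustration, not taken from Allen’s census examples):

```python
import csv
import statistics

# The shape of most early data-analysis exercises: read rows in,
# compute something, write a summary out.

def summarise(in_path: str, out_path: str) -> None:
    with open(in_path, newline="") as f:
        rows = list(csv.DictReader(f))
    populations = [int(row["population"]) for row in rows]
    with open(out_path, "w") as f:
        f.write(f"towns: {len(rows)}\n")
        f.write(f"mean population: {statistics.mean(populations):.1f}\n")

# Tiny demonstration with a throwaway input file:
with open("towns.csv", "w") as f:
    f.write("town,population\nA,100\nB,300\n")
summarise("towns.csv", "summary.txt")
print(open("summary.txt").read())  # towns: 2, mean population: 200.0
```

Once this shape is comfortable, graduating to SciPy/matplotlib is mostly a matter of replacing the middle "do math" step with richer tools.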

For a fun change of pace and/or an intro session and/or while people are installing software on their machines, you can try CodingBat exercises. The Boston Python Workshop’s exercises for Friday and Saturday are a tiny manageable collection. This also gets them into the habit of test-driven development (which is also a good approach to curriculum design, although I need a new name for it because Test Driven Learning is already taken).  CodingBat problems are very basic, so this only applies for the first few sessions before it’ll get too easy for them.
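The test-first habit those exercises build can be shown in miniature: state what a function should do as assertions, then write just enough code to make them pass. (The problem below is a CodingBat-style warm-up, reconstructed here for illustration.)

```python
# Test-driven development in miniature: the assertions were written
# first, as a specification; the implementation came second.

def double_char(s: str) -> str:
    """Return s with every character doubled, e.g. 'ab' -> 'aabb'."""
    return "".join(c * 2 for c in s)

# The "tests", written before the function body existed:
assert double_char("ab") == "aabb"
assert double_char("") == ""
assert double_char("x!") == "xx!!"
print("all tests pass")
```

Starting from failing assertions also gives beginners an immediate, concrete definition of "done", which is half the battle in early sessions.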

I would structure your learning sessions primarily as pair programming time. They’ll learn from each other’s approaches, debug/unstick each other naturally, and learn how to cleanly structure and communicate about code. If you have them pair-program their way through the book, you can spend a chunk of your time writing your own dissertation while being on-call, as in passive pair programming.

Whatever you do, teach them fundamentals of software engineering as you go along — commenting, testing, and version control specifically. Software Carpentry has good resources for this sort of thing. With the same rationale, take the first session and have them make and use github accounts while working through their starter exercises for the sake of everybody’s sanity. This gets them in the habit of working in public, which is important if you ever want to…

…introduce them to a broader community of programmers, which you should do as quickly as possible. Whether that’s “hey folks, join this Python mailing list” or “let’s go to a local Python meetup and get you asking questions — I’ll go to your first one with you and model this introducing-self and question-asking thing in programming-land” or whatever you have around in your neck of the woods, it’s basically the act of teaching them to learn from people other than you. And then… they’re off, and they don’t need you any more to keep learning and doing what they want to do. It’s a great job, making ourselves obsolete.

Hope this helps, and good luck.

Ubuntu Users Win Back Privacy

Ubuntu users and privacy advocates have won a big victory, as Canonical’s Michael Hall announced yesterday that future versions of Unity will give users the option to opt in to searches using online sources. Back in September 2012, I reached out to both the Electronic Frontier Foundation (EFF) and the Free Software Foundation (FSF) and blogged about the new feature landing in Ubuntu 12.10 that would breach user privacy and leak desktop queries.

The EFF and FSF both responded by outlining why this new feature was a breach of user privacy and called on Canonical to fix the feature. For two releases, Canonical maintained that the online search feature was something users liked (apparently having done user studies) and that it respected user privacy.

Yesterday’s announcement clearly indicates that the feature was not something that users valued and that the feature did indeed raise privacy concerns. Later in 2013, Canonical went as far as to abuse Trademark Law by sending an employee of the Electronic Frontier Foundation a frivolous legal notice which had no validity.

For what it’s worth, this change in the Unity desktop will address the issues that users, developers, and advocates have raised over the last two years and puts Ubuntu back at parity with other Linux distros in terms of privacy.

I applaud the Electronic Frontier Foundation, Free Software Foundation, and Privacy International for championing the privacy and choice of Ubuntu Users.



POSSE NG will be starting in the next 24 hours and will be run by me, with support from Samuel. Being the only facilitator, I have to play the roles of Open Source & Technology Evangelist, developer and co-ordinator. I have had little time to prepare for the event, since I first needed to complete my workplace-related tasks, so I made sure to brainstorm about the event on the airplane to Nigeria. The trip was an interesting one, as I had a short encounter with the South African border control again. Here is the first encounter. This time around, the controller met an educated pan-African who would not present his ID book but only his passport, because he doesn’t think there should be borders separating African countries.

The controller was a beautiful but ignorant lady who had never travelled out of South Africa and had never considered that Africa could be a country (in addition to being a continent, like Australia), one comprising 51 states rather than countries. She was also arrogant. She asked me if I had married a Xhosa woman, as other Nigerians do in South Africa; I guess she saw my wedding ring. I didn’t think twice before rhetorically responding by asking why I would marry a South African woman. The confrontation was not going to end anytime soon, and I was conscious of the long queue behind me, so I asked her if I could leave. I later turned back to drop her my business card, on the back of which I had written a note.

Before telling you what I wrote, I must say that I have no reservations about interracial marriage. I wholeheartedly support it, and some time ago I tweeted about South African women fighting their government regarding the way they and their Nigerian husbands are being treated. Anyway, I am going to continue talking about my experience with the controller another time. In a related development, I met some Nigerians on board who told me they had just been deported for various reasons. It was mean of the SA border control to deport a Nigerian because his vaccination book does not fully show when he got the vaccination (the day and month, but not the year, were written in it); and since the vaccination is no longer administered in SA, the passenger(s) could at least have processed it there (again). That is just one of the reasons that 10 or more Nigerians were deported. I am now waiting to see how Nigeria will respond. The two countries were in the news a few months ago for deporting each other’s citizens. This is just the reason I always say Africans (most notably their governments) are myopic.

At the end of it all, I got to Lagos and needed to find my way to one of the venues for POSSE NG (Redeemer’s University, Lagos). It was a very tiring trip, which left me heading for bed at the slightest opportunity. I trust I will be full of life tomorrow for day one of POSSE NG. I can’t wait.


Updated: Apr. 11, 2014 {Changed the words “his permit” to “his ID book but only his passport”}

Sponsor Debconf14

Debconf14 is just around the corner, and although we are making progress on securing sponsorships, there is still a lot of progress to be made in order to reach our goal. I’m writing this blog post to drum up some more sponsors. So if you are reading this and are a decision maker at your company, or know a decision maker, and are interested in supporting Debconf14, then please check out the Debconf14 Sponsor Brochure and, if still interested, reach out to us. I think it goes without saying that we would love to fill some of the top sponsorship tiers.

I hope to see you in August in Portland, OR for Debconf14!


About Debconf

DebConf is the annual Debian developers meeting. An event filled with discussions, workshops and coding parties – all of them highly technical in nature. DebConf14, the 15th Debian Conference, will be held in Portland, Oregon, USA, from August 23rd to 31st, 2014 at Portland State University. For more detailed logistical information about attending, including what to bring, and directions, please visit the DebConf14 wiki.
This year’s schedule of events will be exciting, productive and fun. As in previous years (final report 2013 [PDF]), DebConf14 features speakers from around the world. Past Debian Conferences have been extremely beneficial for developing key components of the Debian system, infrastructure and community. This year will surely continue that tradition.


When MOOC Professors Move

I came across this article on what happens when MOOC professors move to different universities.  What happens to their courses and the resources used in those courses?  Read all about it at this website:

What can we learn from security failures?

After posting on the Apple goto fail bug, it is regrettable to have to talk about another serious, major bug in open source software so soon. This time it is more serious still, in that it has existed for over ten years and is relied upon by many other pieces of commonly deployed open source software. The bug is strikingly similar to Apple’s, in that it occurs in code which is intended to signal an error but which, through a subtle programming fault, in fact fails to do so. The bug was found as a result of an audit commissioned by commercial Linux provider Red Hat, and was discovered and publicised by its own author. What can we learn from these two failures in security-critical open source code? For a start, it might lead us to question the so-called ‘Linus’ Law‘, first recorded by Eric Raymond:

Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix will be obvious to someone.

This is sometimes referred to as the ‘many eyes’ principle, and is cited by some open source proponents as a reason why open source should be more secure than closed source. This conclusion is, however, controversial, and this particular bug shows one reason why. In discussing the reasons why this bug slipped through ten years worth of review, the reviewer/author says the following:

As this code was on a critical part of the library it was touched and thus read, very rarely.

A naive view – and certainly one I’ve subscribed to in the past – is that critical code must surely get reviewed more frequently than non-critical code. In practice, though, it can be the subject of a lot of assumptions, for example that it must be sound given its importance, or that it should not be tinkered with idly and so is not worth reviewing.

So must we abandon the idea that source code availability leads to better security? As I said in the previous post, I think not. We just have to accept that source code availability in itself has no effect. It facilitates code review and improvement, if there’s a will to undertake that work. It makes it easy to share exactly what a bug was once it was found, and in turn it makes it easier for maintainers of other code bases to examine their own source for similar issues. Finally it allows anyone who finds a problem to fix it for themselves, and to share that fix. What we must not do is assume that because it is open source someone has already reviewed it, and – if this incident teaches anything at all – we must not assume that old, critical code is necessarily free of dumb errors.
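Both failures belong to the same class: an error path that compiles, looks plausible, and yet never reports the error. Here is a simplified sketch of that pattern in Python rather than the original C, with invented function names purely for illustration (neither function comes from either codebase):

```python
OK, ERROR = 0, -1

def verify_buggy(checks):
    """Run a list of security checks (callables returning OK or ERROR).

    Mirrors the shape of the Apple bug: a stray early exit (standing in
    for the duplicated 'goto fail' in the C original) bails out while
    err still holds OK, so the remaining checks are silently skipped.
    """
    err = OK
    for i, check in enumerate(checks):
        err = check()
        if err != OK:
            break
        if i == 0:
            break  # the stray exit: err is OK, later checks never run
    return err     # can report success even though checks were skipped

def verify_fixed(checks):
    """Corrected version: every check runs and every failure is reported."""
    for check in checks:
        if check() != OK:
            return ERROR
    return OK
```

With two checks where only the second fails, `verify_buggy` reports success while `verify_fixed` correctly reports the error – exactly the kind of fault that is easy to miss in review, because the buggy code still looks like ordinary error handling.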

5 lessons for OER from Open Source and Free Software

While the OER community owes some of its genesis to the open source and free software movements, there are some aspects of how and why these movements work that I think are missing or need greater emphasis.

open education week 2014

1. It’s not what you share, it’s how you create it

One of the distinctive elements of the open source software movement is open development projects. These are the projects where software is developed cooperatively (not necessarily collaboratively) in public, often by people contributing from multiple organisations. All the processes that lead to the creation and release of software – design, development, testing, planning – happen using publicly visible tools. Projects also actively try to grow their contributor base.

When a project has open and transparent governance, it’s much easier to encourage people to voluntarily provide effort, free of charge, that far exceeds what you could afford to pay for within a closed in-house project. (Of course, you have to give up a lot of control, but really, what was that worth?)

While there are some cooperative projects in the OER space, for example some of the open textbook projects, for the most part the act of creating the resources tends to be private; either the resources are created and released by individuals working alone, or developed by media teams privately within universities.

Also, in the open source world it’s very common for multiple companies to put effort into the same software projects as a way of reducing their development costs and improving the quality and sustainability of the software. I can’t think offhand of any examples of education organisations collaborating on designing materials on a larger scale – for example, cooperating to build a complete course.

Generally, the kind of open source activity OER most often resembles is the “code dump” where an organisation sticks an open license on something it has essentially abandoned. Instead, OER needs to be about open cooperation and open process right from the moment an idea for a resource occurs.

Admittedly, the most popular forms of OER today tend to be things like individual photos, PowerPoint slides, and podcasts. That may partly be because there is not an open content creation culture that makes bigger pieces easier to produce.

2. Always provide “source code”

Many OERs are distributed without any sort of “source code”. In this respect, license aside, they don’t resemble open source software so much as “freeware” distributed as executables you can’t easily pick apart and modify.

Distributing the original components of a resource makes it much easier to modify and improve. For example, where the resource is in a composite format such as a PDF, eBook or slideshow, provide all the embedded images separately too, in their original resolution, or in their original editable forms for illustrations. For documents, provide the original layout files from the DTP software used to produce them (but see also point 5).

Even where an OER is a single photo, it doesn’t hurt to distribute the original raw image as well as the final optimised version. Likewise, for a podcast or video, the original lossless recordings can be made available as individual clips suitable for re-editing.

Without “source code”, resources are hard to modify and improve upon.

3. Have an infrastructure to support the processes, not just the outputs

So far, OER infrastructure has mostly been about building repositories of finished artefacts but not the infrastructure for collaboratively creating artefacts in the open (wikis being an obvious exception).

I think a good starting point would be to promote GitHub as the go-to tool for managing the OER production process. (I’m not the only one to suggest this; Audrey Watters also blogged this idea.)

It’s such an easy way to create projects that are open from the outset, and it has a built-in mechanism for creating derivative works and contributing back improvements. It may not be the most obvious tool from the point of view of educators, but I think it would make it much clearer how to create OERs as an open process.

There have also been initiatives to do a sort of “GitHub for education” such as CourseFork that may fill the gap.

4. Have some clear principles that define what it is, and what it isn’t

There has been a lot written about OER (perhaps too much!). However, what there isn’t is a clear set of criteria that something must meet to be considered OER.

For Free Software we have the Four Freedoms as defined by the FSF: the freedom to run the program as you wish, for any purpose; the freedom to study how the program works and change it; the freedom to redistribute copies; and the freedom to distribute copies of your modified versions to others.

If a piece of software doesn’t support all of these freedoms, it cannot be called Free Software. And there is a whole army of people out there who will make your life miserable if you try to pass it off as such anyway.

Likewise, to be “open source” means to support the complete Open Source Definition published by OSI. Again, if you try to pass off a project as being open source when it doesn’t support all of the points of the definition, there are a lot of people who will be happy to point out the error of your ways. And quite possibly sue you if you misuse one of the licenses.

If it isn’t open source according to the OSI definition, or free software according to the FSF definition, it isn’t some sort of “open software”. End of. There is no grey area.

It’s also worth pointing out that while there is a lot of overlap between Free Software and Open Source at a functional level, how the criteria are expressed is also fundamentally important to their respective cultures and viewpoints.

The same distinctive viewpoints or cultures that underlie Free Software vs. Open Source are also present within what might be called the “OER movement”, and there has been some discussion of the differences between what might broadly be called “open”, “free”, and “gratis” OERs which could be a starting point.

However, while there are a lot of definitions of OER floating around, no such recognised definitions and labels have emerged – no banners to rally to for those espousing these distinctions.

Now it may seem odd to suggest that splitting into factions would be a way forward for a movement, but the tension between the Free Software and Open Source camps has, I think, been a net positive (of course, those in each camp might disagree!). By aligning yourself with one or the other group you make it clear what you stand for. You’ll probably also spend more of your time criticising the other group, and less time on infighting within your own!

Until some clear lines are drawn about what it really stands for, OER will continue to be whatever you want to make of it according to any of the dozens of competing definitions, leaving it vulnerable to openwashing.

5. Don’t make OERs that require proprietary software

OK, so most teachers and students still use Microsoft Office, and many designers use Adobe. However, it’s not that hard to develop resources that can be opened and edited using free or open source software.

The key to this is to develop resources using open standards that allow interoperability with a wider range of tools.

This could become more of an issue if (or rather when) MOOC platforms start to “embrace and extend” common formats for authors to make use of their platform features. Again, there are open standards (such as IMS LTI and the Experience API) that mitigate this. This is of course where CETIS comes in!

Is that it?

As I mentioned at the beginning of this post, OER is to some extent inspired by Open Source and Free Software, so it already incorporates many of the important lessons learned, such as building on (and to some extent simplifying and improving) the concept of free and open licenses. However, it’s about more than just licensing!

There may be other useful lessons to be learned and parallels drawn – add your own in the comments.

Originally posted on Scott’s personal blog

Open Source Open Standards 2014

The Open Source, Open Standards conference is in London on the 3rd of April 2014, and this year OSS Watch has joined the list of supporters for the event.

open source open standards 2014 conference logo

I attended the conference in 2013, which was well attended by people from across the public sector and from open source companies and organisations. You can read my post on that event here.

Given how closely aligned open standards and open source software are in the view of policy makers (though we do like to point out that it’s not quite that simple), it’s perhaps odd that so few events explicitly cover both bases.

The event is also interesting in that there are as many talks – perhaps more – from the customer point of view as from suppliers. This means it is much more about sharing experiences than the sales-oriented events commonly targeted at the public sector, and this is why we decided to be associated with the event this year.

Public sector organisations need to share experiences with each other, not just engage with suppliers, if they are to take advantage of the opportunities of open source software and open standards, and events like this are one place to do just that. (Of course, this applies equally to educational institutions such as universities and colleges – and the organisers are keen this year to open up the scope to include them.)

If you attend the event, feel free to say hello – one of us at least will be there on the day.

How Much Are We Responsible For..?

So, found more info on the project. Dhimitris found most of the sections where we need to add the comments, so that should work well. On the other hand, from what we can see, the last person to work on this project also went about building actual test classes for the project, so we’re not entirely sure whether or not to do the same.

There’s also some question about which sections still need to be done. Looking through the project, we found a few cases that mentioned the need for ‘tests’ but didn’t specifically mention JUnit tests, so there’s some question of whether those were just lazy typos or separate things we’re not responsible for.

Most likely, we’ll first focus on the TODO requests and on adding the @should comments. It seems they have a plug-in available that uses such comments to generate test classes, so the comments are first priority. Afterwards, we might make those test classes ourselves, after asking the ticket publisher.

More Readings…

Well, I finally got the files and such accessible from my own computer in Eclipse, which was more of a hassle than I really expected. I found I had to read up more on JUnit and how it works. It took a bit of time looking through various online sources, but now I feel like I can really get into this.

Hopefully, at least. Jury might still be out on that one. Learned a few new things, at least.

Should Markdown become a standard?

We’re big fans of Markdown at OSS Watch; the lightweight format is how we create all the content for the OSS Watch website (thanks to the very nice Jekyll publishing engine).

Markdown is more or less a set of conventions for semi-structured plain text that can be fairly easily converted into HTML (or other formats). The idea is that it’s easy to read and write using text editors. This makes it great for wikis and for source code documentation. It’s the format used for most of the README files on GitHub, for example.

# This is a header

Some text with bits in **bold** or in *italics*.

> I'm a blockquote!

## Sub-header
* Bullet one
* Bullet two
* Bullet three
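Conventions like those above map to HTML almost mechanically, which is a large part of Markdown’s appeal. As a rough sketch (a toy converter covering only the handful of constructs shown, not the rules of any real Markdown flavour):

```python
import re

def tiny_markdown(text: str) -> str:
    """Convert a tiny subset of Markdown to HTML: headers, bold,
    italics, blockquotes and bullet lines. A toy illustration,
    nothing like a full (or compliant) implementation."""
    html = []
    for line in text.splitlines():
        # inline styles: bold before italics so ** is not eaten by *
        line = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", line)
        line = re.sub(r"\*(.+?)\*", r"<em>\1</em>", line)
        if line.startswith("## "):
            html.append(f"<h2>{line[3:]}</h2>")
        elif line.startswith("# "):
            html.append(f"<h1>{line[2:]}</h1>")
        elif line.startswith("> "):
            html.append(f"<blockquote>{line[2:]}</blockquote>")
        elif line.startswith("* "):
            html.append(f"<li>{line[2:]}</li>")
        elif line:
            html.append(f"<p>{line}</p>")
    return "\n".join(html)
```

A real processor has far more to worry about – nested lists, paragraphs spanning lines, escaping, tables – and it is exactly in those corners that implementations diverge.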

Markdown has flourished despite the fact it isn’t really an open standard as such. There are numerous “flavours” of Markdown, such as GitHub-Flavoured Markdown, and different processing engines have their own extensions, particularly in areas where there is no common agreement on how to represent things such as tables and definition lists.

Despite this lack of consistency, Markdown works very well without ever becoming a de jure standard. The closest it’s got to standardisation is a community site on GitHub, following a call to action by Jeff Atwood.

What has brought this into focus for me is the discussion around open formats for the UK government. Putting the (huge) issue of ODF versus OOXML to one side, I would actually prefer it if more government content were in an easy-to-read plain-text format rather than in any flavour of whopping great office-style document. In fact, wouldn’t it be excellent if they were to use Markdown?

Which is where the problem lies – it’s difficult for government to mandate or even recognise “standards” that have no clear provenance or governance model arising out of some sort of recognisable standards organisation. This isn’t a problem when it’s just a case of “using whatever works” as an individual developer (which is how sort-of standards like Markdown and RSS take off), but it seems to be a major barrier when trying to formulate policy.

So sadly, unless there is a new concerted effort to make some sort of standardised Markdown, I don’t think my dream of reading government documents in Markdown using TextEdit or on GitHub is likely to become a reality.

International Conference on Technology in Education (ICTE) 2014

2 - 4 July 2014
Hong Kong

Organized by:
the Open University of Hong Kong,
the School of Professional and Continuing Education of the University of Hong Kong (HKU SPACE), and
Caritas Institute of Higher Education (CIHE).

2nd Regional Symposium on Open Educational Resources

The 2nd Regional Symposium on Open Educational Resources - Beyond Advocacy, Research and Policy


24 - 27 June 2014
Wawasan Open University


(i)  Deadline for abstract submission:   16 March 2014
(ii) Deadline for full paper submission:  4 May 2014

Nanyang Girls’ High School featured on Apple for using iPad apps to help its students


Getting More Out of the IETF for Africans

Anyone reading my blog posts will likely have seen that I recently started to dump email correspondence on the blog. Some parts of these emails (or perhaps the whole correspondence) should remain confidential; I would agree with anyone on that, and it’s possible that some of the correspondents might not be comfortable seeing their messages on the Internet.

When there is a need for a change, and all parties involved dearly want the change, it is necessary to make all information that can help improve the situation available to everyone involved. To be specific, I, like some other people around the globe, want more contributions to the IETF from Africa. As each IETF meeting draws nearer, domain experts, industry players, and scholars across the globe post to various mailing lists about the upcoming meeting and the options for remote participation (for those who cannot attend). One such mailing list is the Afrinic mailing list, which I subscribed to some time ago. A few days ago, I saw a post about the ongoing IETF 89 meeting in London.

I kept tabs on the correspondence and saw that the correspondents were also asking themselves where, how, and why Africa is under-represented, not meaningfully contributing, or not getting valuable results from the IETF meetings. I pointed them to some of my efforts at IETF 88 in Vancouver. I later found that one or more people had expressed their concerns on one of the IETF mailing lists, as I did.

Some of the correspondents have now asked me about the outcome of my efforts, so I thought it would be wise to put everything out there on the Internet. Hopefully, someone somewhere somehow will see these threads and help give all our efforts a further push. Below is my interaction with some of the key ISOC members that drive the IETF. Good luck to Africa and the other emerging economies. By the way, the Latin America region is highly visible at the IETF, unlike Africa.




———- Original Message ———-
From: “Michael Adeyeye, PhD”
To: Ray Pelletier
Cc: Richard Barnes, Alexa Morris, Steve Conte
Date: 18 December 2013 at 06:32
Subject: Re: List of IETF Participants from Africa

Hi Ray,
You are perfectly right. Please see inline.

> On 16 December 2013 at 16:51 Ray Pelletier wrote:
> Michael
> I want to make sure I understand your request.
> You would like a mailing list created by the IETF for use to discuss IETF matters among IETFers from Africa, like
> You would like to send an email to Africans who have attended IETF meetings anytime over the last 5 – 10 years.
> We can set up such a mail list with you as Administrator.
> We do not give out email addresses. However, we would be willing to send an email drafted by you to those we can identify from our IETF
> attendance list as having selected a country in Africa for some number of years.

I am not too sure how this would work. If you say you won’t be able to give out their email addresses, what addresses will be in the mailing list?

The mailing list also needs to be updated as newcomers from Africa attend the IETF meetings. Preferably, the newcomers need to be added before “the meeting” comes up.

> Do I understand you correctly? Is there more, or less?

> Ray

———- Original Message ———-
From: “Michael Adeyeye, PhD”
To: Richard Barnes, Ray Pelletier
Cc: Steve Conte
Date: 16 December 2013 at 08:52
Subject: Re: List of IETF Participants from Africa

Thanks Richard. And apologies for not acknowledging receipt of your emails. I had been away for some time. I just returned from Nigeria.

> On 09 December 2013 at 21:58 Richard Barnes wrote:
> Hey Michael,
> I’m forwarding you over to Ray Pelletier, who is the Administrative
> Director for the IETF. He can help you get a message to the folks on
> your list. If you can provide him with a message (e.g., an invitation
> to join a mailing list), he can forward it to the group you’re
> interested in.
> Hope this helps,
> –Richard

———- Original Message ———-
From: “Michael Adeyeye, PhD”
To: Steve Conte
Date: 26 November 2013 at 21:25
Subject: Re: List of IETF Participants from Africa

Hi Steve and Richard,
Attached is a list containing attendees of the IETF meetings for the past 5-7 years. Is it possible to get their email addresses from the secretariat?
The information is needed by the task force committee.


On 14 November 2013 at 16:02 Steve Conte wrote:

Hi Michael,

The IETF website has past meeting proceedings, which include the plenary slides, that usually have participant data.  This can be found here:

However, one HUGE thing to keep in mind.. IETF meetings happen three times a year (approximately 15 days).  The Working Group mailing lists happen 365 days a year.  Producing statistics about meeting attendance is one thing, but I don’t feel it would really capture the full participation of a specific region.

As for getting that kind of data, I don’t know if there’s a way to do so, since participating in the IETF process only requires that you have a functional email address.


Steve Conte
Internet Leadership Programme
The Internet Society

Reply-To: “Michael Adeyeye, Ph.D”
Date: Thursday, November 14, 2013 12:29 AM
To: Steve Conte
Subject: List of IETF Participants from Africa

Hi Steve and Richard,
I don’t know who can help me get a list of the IETF participants from Africa for the last 5-10 years.

We (the IETF African task force) would like to get their contact details. We will also need you to help set up a mailing list that we can use to communicate with one another. Please send me the credentials for the mailing list so that I can administer it.


———- Original Message ———-
From: “Michael Adeyeye, Ph.D”
To: Steve Conte,
Date: 14 November 2013 at 09:29
Subject: List of IETF Participants from Africa

Hi Steve and Richard,
I don’t know who can help me get a list of the IETF participants from Africa for the last 5-10 years.

We (the IETF African task force) would like to get their contact details. We will also need you to help set up a mailing list that we can use to communicate with one another. Please send me the credentials for the mailing list so that I can administer it.


———- Original Message ———-
From: “Michael Adeyeye, Ph.D”
To: Christian O’Flaherty, Arturo Servin, Michuki Mwangi
Cc: “Alvaro Retana (aretana)”
Date: 13 November 2013 at 18:11
Subject: Re: [ericas] AFRICANs @ the IETF 88

Please reconfirm.

———- Original Message ———-
From: “Michael Adeyeye, Ph.D”
To: Arturo Servin, Christian O’Flaherty, Michuki Mwangi
Cc: “Alvaro Retana (aretana)”
Date: 13 November 2013 at 18:08
Subject: Re: [ericas] AFRICANs @ the IETF 88

Hi Everyone,
I thought the times were PM (post meridiem) here, which would be AM in the west.
I just checked and saw that there was a mistake. I have now changed the times. They should now be “appropriate.”

Thank you for your understanding.


> On 13 November 2013 at 14:37 Christian O’Flaherty wrote:
> Hi Michael,
> Due to timezone differences the current suggestions are not appropriate
> for us. Could you please add more options in the 10AM-10PM UTC range?
> Thanks,
> Christian O’Flaherty -
> Regional Development – Internet Society
> On 11/13/13 11:21 AM, “Arturo Servin” wrote:
> >
> > Done!
> >
> >.as
> >
> >
> >On 11/13/13, 5:03 AM, Christian O’Flaherty wrote:
> >> Adding Michuki and Arturo to the poll.
> >>
> >>
> >>
> >>
> >>
> >> Christian
> >>
> >>
> >> On 11/13/13 5:46 AM, “Michael Adeyeye, Ph.D” wrote:
> >>
> >>> When will be convenient for us to talk?
> >>>
> >>> My skype id is “micadeyeye”
> >>>
> >>> Please forward the URL to anyone else you think we should get involved.
> >>>
> >>> Thanks.
> >>>
> >>>
> >>>
> >>> On 07 November 2013 at 16:14 “Alvaro Retana (aretana)”
> >>> wrote:
> >>>
> >>>
> >>> Michael:
> >>>
> >>> Hi!
> >>>
> >>> I’m adding Christian from the Internet Society who has been part of the
> >>> process in LATAM too. I believe he’s still in Vancouver.
> >>>
> >>>
> >>> I am traveling in Mexico this week. Let’s try and set up a call
> >>>sometime
> >>> next week. Let me know a couple of days/times that would work for you.
> >>>
> >>>
> >>> Alvaro.
> >>>
> >>> On 11/7/13 8:52 AM, “Michael Adeyeye, Ph.D” <
> >>> wrote:
> >>>
> >>>
> >>> Please go ahead. I wish you were still around though. We can arrange a
> >>> chat too.
> >>>
> >>>
> >>>
> >>> On 07 November 2013 at 15:45 “Alvaro Retana (aretana)” <
> >>> wrote:
> >>>
> >>> Michael:
> >>>
> >>> Hi! How are you?
> >>>
> >>> I am not African..and already had to leave Vancouver.. :-(
> >>>
> >>> In Latin America we (LACNOG) started a similar effort to increase
> >>> participation from persons in the region. We formed a task force
> >>>(which
> >>> I chair) and have been doing some activities. If interested, I would
> >>>be
> >>> happy to set out some time to talk about
> >>> the experience and what we’ve one.
> >>>
> >>> Regards,
> >>>
> >>> Alvaro.
> >>>
> >>> On 11/7/13 7:54 AM, “Michael Adeyeye, Ph.D” <
> >>> wrote:
> >>>
> >>>
> >>> Is
> >>> it possible for us to meet for a brief meeting today or tomorrow
> >>>(before
> >>> we all depart to our various destinations)?
> >>>
> >>> I would be interested
> >>> in talking to you all on how we can improve on our representation and
> >>> contributions to the IETF. It would also be good to discuss how we can
> >>> help develop the continent via this network.
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >

———- Original Message ———-
From: “Michael Adeyeye, Ph.D”
To: “Alvaro Retana (aretana)”
Cc: Christian O’Flaherty
Date: 13 November 2013 at 08:46
Subject: Re: [ericas] AFRICANs @ the IETF 88

When will be convenient for us to talk?

My skype id is “micadeyeye”

Please forward the URL to anyone else you think we should get involved.


On 07 November 2013 at 16:14 “Alvaro Retana (aretana)” wrote:



I’m adding Christian from the Internet Society who has been part of the process in LATAM too.  I believe he’s still in Vancouver.

I am traveling in Mexico this week.  Let’s try and set up a call sometime next week.  Let me know a couple of days/times that would work for you.


On 11/7/13 8:52 AM, “Michael Adeyeye, Ph.D” wrote:

Please go ahead. I wish you were still around though. We can arrange a chat too.

On 07 November 2013 at 15:45 “Alvaro Retana (aretana)” wrote:


Hi!  How are you?

I am not African..and already had to leave Vancouver..  :-(

In Latin America we (LACNOG) started a similar effort to increase participation from persons in the region.  We formed a task force (which I chair) and have been doing some activities.  If interested, I would be happy to set out some time to talk about the experience and what we’ve one.



On 11/7/13 7:54 AM, “Michael Adeyeye, Ph.D” wrote:

Is it possible for us to meet for a brief meeting today or tomorrow (before we all depart to our various destinations)?

I would be interested in talking to you all on how we can improve on our representation and contributions to the IETF. It would also be good to discuss how we can help develop the continent via this network.

———- Original Message ———-
From: “Michael Adeyeye, Ph.D”
To: Richard Barnes <>
Date: 11 November 2013 at 08:09
Subject: Re: [88attendees] AFRICANs @ the IETF 88

Hi Richard,
Thanks a million for this note. I wrote down pretty much the same thing.

I will be getting in touch w.r.t. the next course of action soonest.


> On 08 November 2013 at 19:21 Richard Barnes wrote:
> Hey Michael,
> Thanks for organizing. I thought it was a really good meeting, and I
> hope it succeeds in building some momentum. My notes for the session
> are below, in case they’re helpful.
> I really do think it’s important to get something concrete, whether
> that’s an I-D (or many I-Ds!) or a BoF, or a statement about why the
> IETF matters to ICT professionals. It’s important to build community
> and be supportive of people, but in my experience, you really need
> some concrete objectives in order to keep people interested.
> Please let me know if there’s *anything* I can do to help.
> Best,
> –Richard
> — Languages: how to accommodate
> — translation in meetings
> — newcomers in more languages
> — academics
> — DANE / DNS-based security
> — Constrained networks
> — LMAP reviews?
> — mentorship / responsiveness
> — scarcity of skilled people
> — Mesh networks: Village telco in ZA
> — where are the past fellows?
> — organize an IETF Africa group?
> — there’s a South American one
> — present at AfNOG?
> — like RIPE regional meetings?
> — awareness raising
> — 1 pager on what the IETF is, why it matters
> — IETF African Task Force: set up mailing list, use to publicize
> On Fri, Nov 8, 2013 at 8:34 AM, Michael Adeyeye, Ph.D
> wrote:
> > Hi Everyone,
> > Kensington is now booked for our meeting.
> > It’s starting at 9am; and please come by to share your experience and
> > thoughts.
> >
> > Regards.
> >
> >
> >
> > On 08 November 2013 at 11:53 “Michael Adeyeye, Ph.D” wrote:
> >
> > Many thanks goes to Spencer Dawkins and Fred Baker for the non-exhaustive
> > list of things we should also look into. They are highly informative.
> >
> >
> > There are just so much things to talk about that time wouldn’t permit us to
> > do this morning.
> >
> >
> > Parts of the Agenda:::
> > 1. Contributing to the IETF: The IETF 88 just showed many of us how ideas
> > are turned into standards. Wouldn’t it be good to see (native) African names
> > on an RFC, IAB, WG-chairs, e.t.c.? One way of doing so is by having a
> > “fighting spirit” with continuous mentorship/support from the IETF members &
> > various bodies. Just before the term “WebRTC or RTCWeb” came into the
> > limelight in 2011 (, someone from
> > Africa had earlier seen a need for it
> > (
> > It was in 2007 that the idea first came up and a proof of concept was later
> > developed ( Today, the RTCWeb Working Group is now
> > standardizing it. It started out as an application (OR an idea) -i.e.
> > getting SIP into browsers for browser-to-browser communication. I am certain
> > that there are some many other ideas like that coming out of Africa. We now
> > need to push ourselves further to get our names there.
> >
> > 2. Getting more people involved: AT the moment, over ten people (students,
> > academics, e.t.c.) from different African countries have asked me how they
> > can get involved in the IETF activities. SOme other IETF 88 fellows from the
> > continent have also suggested that we talk about ways of sharing our
> > experiences. The situation is not peculiar to Africa. Many thanks to the
> > task force from South America that now wants to guide us on possible
> > ways/solutions.
> >
> > 3. Re-imaging the world’s view about Africa: Yes, I used the word
> > “re-image.” I am referring to the computing concept from “virtual images.”
> > What people hear/see about the continent (mostly negative things) is
> > different from what they see, when the visit (some parts of) the continent.
> > How do we get the continent to earn its own respect like Asia and South
> > America? (Ref -
> >
> >
> >
> > Please feel free to dump your thoughts as you’ve been doing…..
> >
> > Regards.
> >
> >
> >> On 08 November 2013 at 08:30 Spencer Dawkins
> >> wrote:
> >>
> >>
> >> On 11/7/2013 8:24 PM, Fred Baker (fred) wrote:
> >> > On Nov 7, 2013, at 3:07 AM, “Michael Adeyeye, Ph.D”
> >> > wrote:
> >> >> I would be interested in talking to you all on how we can improve on
> >> >> our representation and contributions to the IETF. It would also be good to
> >> >> discuss how we can help develop the continent via this network.
> >> > An important consideration in this is that while your presence in
> >> > meetings is valuable, your presence on mailing lists is also valuable and
> >> > comparatively inexpensive. As a first step, you might consider looking
> >> > through the set of drafts labeled draft-ietf-*.txt, which is to say “working
> >> > group drafts”. Their working group will generally be the third word, like
> >> > draft-ietf-ospf or draft-ietf-v6ops. Access them online, and, if they
> >> > interest you, comment on them. The most interesting comments will be those
> >> > that improve them in some way – identify issues and suggest text. That will
> >> > get african viewpoints into discussions regarding current work product.
> >> >
> >> > BTW, the same goes for south americans and anyone else that feels
> >> > under-represented. Get involved on mailing lists.
> >> >
> >> > Daily news can be found at, and specifically
> >> > It takes a minute to look at it, and from
> >> > time to time you may find something of interest to comment on. You can also
> >> > go to
> >> >
> >> > If you need guidance regarding a given working group, the obvious people
> >> > to get it from are the chairs, which you can reach by emailing the
> >> > list for the working group. For example, if you want
> >> > to reach the v6ops chairs, email For a list of
> >> > the working groups and access to their charters and their mailing list
> >> > membership processes, go to
> >> >
> >> > The next step might include writing your own drafts and submitting them
> >> > for discussion. But you don’t need to rush into that; get a sense of what’s
> >> > going on and then contribute to it.
> >>
> >> I agree with Fred’s suggestions, and wanted to mention a couple of other
> >> things …
> >>
> >> If during your checking around you find problems with protocols we’re
> >> working on that don’t work in your particular country or environment,
> >> please tell us.
> >>
> >> I’m remembering (possibly dreaming, it’s been a long week) that GeoPriv
> >> was rolling along when someone somewhere in Asia pointed out that in
> >> their country, and perhaps only in their country, some civic addresses
> >> included *alleys*, and asked how these addresses should be encoded. If
> >> we hadn’t heard from participants from that country, we wouldn’t have
> >> known until someone tried to deploy products in that country
> >> (inconveniently late for a standards discussion).
> >>
> >> The TSV area has been looking at a tunneling/compression/multiplexing
> >> proposal (details at, and
> >> this is likely to pop back up at IETF 89 in London, if the BOF
> >> requestors ask for that). It turns out that we got some support from
> >> African participants who find it fits their connectivity to the rest of
> >> the Internet.
> >>
> >> You might also check out the discussions to date on the diversity
> >> mailing list, where people are doing things like asking what it would
> >> take to set up regional meetings for folks who can’t travel to an IETF
> >> meeting, so that more people can engage and contribute. See
> >> for
> >> the archive.
> >>
> >> If you’re thinking about how to help people back home who weren’t able
> >> to attend, you might also make use of training materials from the Sunday
> >> tutorials (for instance, the IETF 87 Newcomer’s Training is at
> >>
> >> – I just reported that the IETF 88 version returned a 404/not found).
> >> These aren’t all process tutorials, either – for instance, if people
> >> care about realtime applications and infrastructure,
> >>
> >> would be helpful.
> >>
> >> I hope this helps you and your colleagues contribute effectively to the
> >> IETF.
> >>
> >> Spencer, in this case, writing as an AD
> >>
> >
> >
> >
> >
> > _______________________________________________
> > 88attendees mailing list
> >
> >
> >

———- Original Message ———-
From: “Michael Adeyeye, Ph.D”
To: Spencer Dawkins, “Fred Baker (fred)”
Cc: “” <>
Date: 08 November 2013 at 17:34
Subject: Re: [88attendees] AFRICANs @ the IETF 88

Hi Everyone,
Kensington is now booked for our meeting.
It’s starting at 9am; and please come by to share your experience and thoughts.


———- Original Message ———-
From: “Michael Adeyeye, Ph.D”
To: Richard Barnes
Date: 08 November 2013 at 02:34
Subject: Re: [88attendees] AFRICANs @ the IETF 88

Please come!

> On 07 November 2013 at 16:09 Richard Barnes <> wrote:
> Hey guys,
> If it would be helpful to have an IESG member there, please let me
> know. I think it’s great that you guys are working on this, and I
> would be glad to provide any help I can.
> –Richard
> On Thu, Nov 7, 2013 at 5:45 AM, Adama dembélé wrote:
> > Hi Adeye,
> > it is a good idea. I am available this morning and tomorrow after 13h…
> > Waiting for your details…
> > Regards
> > ————————————————————————
> >
> > ________________________________
> > De : “Michael Adeyeye, Ph.D”
> > À :
> > Envoyé le : Jeudi 7 novembre 2013 3h07
> > Objet : [88attendees] AFRICANs @ the IETF 88
> >
> >
> > Hi The African folks@IETF 88,
> > Is it possible for us to meet for a brief meeting today or tomorrow (before
> > we all depart to our various destinations)?
> >
> > I would be interested in talking to you all on how we can improve on our
> > representation and contributions to the IETF. It would also be good to
> > discuss how we can help develop the continent via this network.
> >
> > Regards,
> > Michael
> >
> >
> >

———- Original Message ———-
From: “Michael Adeyeye, Ph.D”
To: Paul M, Ed Pascoe
Cc: Adama dembélé, “Edwin A. Opare”, “” <>
Date: 08 November 2013 at 01:52
Subject: Re: [88attendees] AFRICANs @ the IETF 88

Thanks Guys,
So, we will all be meeting as follows:
Venue: the Kensington room at the fourth floor
Time: 9am

- Where do we go from HERE?


On 08 November 2013 at 00:35 Ed Pascoe wrote:


Straight after the Fellows wrap up would work for me as well.


On Thu, Nov 7, 2013 at 2:58 PM, Paul M wrote:
Dear Michael, Adama et al,

We could schedule a meet up tomorrow morning right after the ISOC Fellows wrap up which will be taking place from 8:00 – 8:45 am at the Kensington room at the fourth floor. If all of you are present we could meet right after the wrap up perhaps in the same room.

Kindest Regards,

Paul M

On 7/11/2013, at 5:45 am, Adama dembélé wrote:

Hi Adeye,
it is a good idea. I am available this morning and tomorrow after 13h…
Waiting for your details…

De : “Michael Adeyeye, Ph.D”
À :
Envoyé le : Jeudi 7 novembre 2013 3h07
Objet : [88attendees] AFRICANs @ the IETF 88

Hi The African folks@IETF 88,
Is it possible for us to meet for a brief meeting today or tomorrow (before we all depart to our various destinations)?

I would be interested in talking to you all on how we can improve on our representation and contributions to the IETF. It would also be good to discuss how we can help develop the continent via this network.



———- Original Message ———-
From: “Michael Adeyeye, Ph.D”
Date: 07 November 2013 at 12:07
Subject: AFRICANs @ the IETF 88

Hi The African folks@IETF 88,
Is it possible for us to meet for a brief meeting today or tomorrow (before we all depart to our various destinations)?

I would be interested in talking to you all on how we can improve on our representation and contributions to the IETF. It would also be good to discuss how we can help develop the continent via this network.


GitHub Activities

Well… this is kind of annoying. I hate the feeling of not being sure what the heck I’m doing. GitHub doesn’t seem to want to cooperate with me. I think I’ve got it installed properly, but I’m having trouble finding things to edit and such.

I have been able to edit a few things on the school/class wiki, though. I managed to update my own page to include some of the information from the recent ‘assignment’ from class (blog link, OpenMRS ID, GitHub username), and added a function to collapse all that information from the Issue Tracker activity. I find it a lot easier to work that way: if I’m not sure what I need to do, I can find another page with the formatting I’m looking for, then copy and experiment. I might look into doing some things for our project from there. I’ve already started making the page, at least (though it was mostly copy/paste from the original CS 401 page).

I managed to get linked to the project ticket for our group, though that was more them adding me than anything I did, and I think I’ve copied the files from the original to my machine, but I’m still looking for a way to edit…

Well, now I think I’m getting somewhere updating Eclipse. I had to go in and update what I currently had on my computer, and then install Maven, though a few errors popped up. If I avoid the optional things, maybe…

Well, it seems to be working. Maybe. Fingers crossed…

Ok… most of these downloads are going over 100%… hope nothing’s broken… please? Well, restarting Eclipse again…

HA! I finally got everything working (as far as I can tell). Hopefully this will let me keep working.

Our Draft (MSRP over WebRTC data channels) @ the IETF89

Our draft (MSRP over WebRTC data channels) has now been given a further push at the IETF. As we were advised at the IETF88 in Vancouver to take it from the DISPATCH WG to the MMUSIC working group, we have now done so.

An update on the work will be presented at the MMUSIC WG session ( on Thursday at 10:30-10:50am. If you are attending the IETF89 in London, kindly come join us in the Buckingham room. And if you can’t come, remote participation is welcome. Please visit

A new Voice in the crowd

Six months ago, four journalists quit their respective jobs at the leading UK Linux magazine, Linux Format. Today, a new magazine hit the shelves of the country’s newsagents: Linux Voice.

Linux Voice on the shelves of a popular high-street newsagent (yes, that one)

With the same team behind it, Linux Voice has the same feel and a similar structure to old issues of Linux Format. However, Linux Voice aims to be different from other Linux publications in three key ways: it’s independent, so answerable only to its readers; nine months after publication, all issues will be licensed CC-BY-SA; and 50% of the profits at the end of each financial year will be donated to free software projects, as chosen by the readers.

Linux Voice’s Copyright notice, including an automatic re-licensing clause

By presenting itself with these key principles, Linux Voice embodies in a publication the spirit of the community it serves, which provides a compelling USP for free software fans. On top of that, Linux Voice was able to get started thanks to a very successful crowdfunding campaign on Indiegogo, allowing the community to take a real sense of ownership.

Aside from the business model, the first issue contains some great content. There’s a 2-page section on games for Linux, which would have been hard to fill two years ago, but is now sure to grow. There’s a round-up of encryption tools looking at security, usability and performance, to help average users keep their data safe. There’s a bundle of features and tutorials, including homebrew monitoring with a Raspberry Pi and PGP email encryption. Plus, of course, letters from users, news, and the usual regulars you’d expect from any magazine.

I’m particularly impressed by what appears to be a series of articles about the work of some of the female pioneers of computing. Issue 1 contains a tutorial looking at the work of Ada Lovelace, and Issue 2 promises to bring us the work of Grace Hopper.  It’s great to see a publication shining the spotlight on some of the early hackers, and it’s fascinating to see how it was done before the days of IDEs, text editors, or even in some cases electricity!

For your £6 (less if you subscribe) you get 114 pages jammed with great content, plus a DVD with various Linux distros and other software to play with. Well worth it in my opinion, and I look forward to Issue 2!

Open Source Phones at MWC

Mobile World Congress is running this week in Barcelona. While it’s predictable that we’ve seen lots of Android phones, including the big unveiling of the Galaxy S5 from Samsung, I’ve found it interesting to see the coverage of the other devices powered by open source technologies.

Mozilla announced their plans for a smartphone that could retail for as little as $25. It’s based on a new system-on-chip platform that integrates a 1GHz processor, 1GB of RAM and 2GB of flash memory, and will of course be running the open source Firefox OS.  It’s very much an entry level smartphone, but the $25 price point gives real weight to Mozilla’s ambition to target the “next billion” web users in developing countries.

Ubuntu Touch is finally seeing the light of day on 2 phones, one from Chinese manufacturer Meizu and one from Spanish manufacturer Bq.  Both phones are currently sold running Android, but will ship with Ubuntu later this year.  The phones’ internals have high-end performance in mind, with the Meizu sporting an 8-core processor and 2GB of RAM, clearly chosen to deliver Ubuntu’s fabled “convergence story”.

Rumours have abounded this year that Nokia has been planning to release an Android smartphone, and they confirmed the rumours were true at MWC, sort of. “Nokia X” will be a fork of Android with its own app store (as well as third-party ones) and a custom interface that borrows elements from Nokia’s Asha platform and Windows Phone. Questions were raised at the rumour mill over whether Microsoft’s takeover of Nokia’s smartphone business would prevent an Android-based Nokia from being possible. However, Microsoft’s vice-president for operating systems, Joe Belfiore, said “Whatever they do, we’re very supportive of them,” while Nokia’s Stephen Elop maintains that the Windows-based Lumia range is still their primary smartphone product.

A slightly more left-field offering comes in the shape of Samsung’s Gear 2 “smartwatch” running Tizen, the apparently-not-dead-after-all successor to Maemo, Meego, LiMo, and all those other Linux-based mobile operating systems that never quite made it.  The device is designed to link up to the Samsung Galaxy range of Android phones, but with the dropping of “Galaxy” from the Gear’s branding, perhaps we’ll be seeing a new brand of Tizen powered smartphones from Samsung in the future.

GotoFail, Open Source and Edward Snowden

On Friday Apple released a patch for a flaw in one of their core security libraries. The library is used both in Apple’s mobile operating system iOS, and their desktop operating system OSX. As of today, the desktop version has yet to be patched. This flaw, and its aftermath, are interesting for a number of reasons.

Firstly, it’s very serious. The bug means that insecure network connections are falsely identified as secure by the operating system. This means that the flaw has an impact across numerous programs; anything that relies on the operating system to negotiate a secure connection could potentially be affected. This makes a whole range of services like web and mail vulnerable to so-called ‘man-in-the-middle’ attacks where a disreputable network host intercepts your network traffic, and potentially thereby gains access to your personal information.

Secondly, the flaw was dumb. The code in question includes an unnecessarily duplicated ‘goto’, highlighted here:

It looks like a cut-and-paste error, as the rogue ‘goto’ is indented as though it is conditional when – unlike the one above it – it is not. There are many reasons a bug like this ought not to get through quality assurance. It results in unreachable code, which the compiler would normally warn about. It would have been obvious if the code had been run through a tool that checks coding style, another common best practice precaution. Apple have received a huge amount of criticism for both the severity and the ‘simplicity’ of this bug.

Thirdly, and this is where we take a turn into the world of free and open source software, the code in question is part of Apple’s open source release programme. That is why I can post an image of the source code up there, and why critics of Apple have been able to see exactly how dumb this bug is. So one effect of Apple making the code open source has been that – arguably – it has increased the anger and ridicule to which they have been exposed. Without the source being available, we would have a far less clear idea of how dumb a mistake this was. Alright, one might argue, open source release makes your mistakes clear, but it also lets anyone fix them. That is a good trade-off, you might say. Unfortunately, in this case, it is not that simple. Despite being open source, the security framework in question is not provided by Apple in a state which makes it easy to modify and rebuild. Third party hackers have found it easier to fix the OSX bug by patching the faulty binary – normally a much more difficult route – rather than using Apple’s open source code to compile a fixed binary.

It is often argued that one key benefit of open source is that it permits code review by anyone. In this case, though, despite being a key security implementation and being available to review for over a year, this bug was not seemingly identified via source review. For me, this once again underlines that – while universal code review is a notional benefit of open source release – in practice it is universal ability to fix bugs once they’re found that is the strongest argument for source availability strengthening security. In this case Apple facilitated the former goal but made the latter problematic, and thereby in my opinion seriously reduced the security benefit open source might have brought.

Finally, it is interesting to note that a large number of commentators have asked whether this bug might have been deliberate. In the atmosphere of caution over security brought about by Edward Snowden’s revelations, these questions naturally arise. Did Apple deliberately break their own security at the request of the authorities? Obviously we cannot know. However it is interesting to note the relation between that possibility and the idea that open source is a weapon against deliberate implantation of flaws in software.

Bruce Schneier, the security analyst brought in by The Guardian to comment on Snowden’s original documents, noted in his commentary that the use of free and open source software was a means of combating national security agencies and their nasty habit of implanting and exploiting software flaws. After all if you can study the source you can see the backdoors, right? Leaving aside the issue of compromised compiler binaries, which might poison your binaries even when the source is ‘clean’, the GotoFail incident raises another question about the efficacy of open source as a weapon against government snooping. Whether deliberate or not, this flaw has been available for review for over a year.

The internet is throbbing with the schadenfreude of programmers and others attacking Apple over their dumbness. Yet isn’t another lesson of this debacle that we cannot rely on open source release on its own to guarantee that our security-critical code is neither compromised nor just plain bad?

An Obfuscated String Implementation for Go

Recently, I purchased the domain, and I was possessed by the urge to make an actual dating website. I thought it would be cool to write it in Go, since that's one of the hip new languages of the now, and also because it does a lot of things that I really like.

I started by writing the account management and registration code. This is surprisingly tricky, because handling sensitive data with any semblance of security is a pain. Luckily, Go exposes a number of very useful syscalls for us to work with.

Since I'm not offering enough click-bait on my blog, I feel like it's only appropriate to turn the rest of this post into a listicle.

4 Weird Tricks One Florida Man Used to Protect Sensitive Data in Memory!!! (Hackers hate him!)

1. mlock your pages

Allocate as many pages of memory as you need to hold your string. Then mlock them immediately. mlock is a portable function that strongly encourages your kernel not to swap out a page of memory.

Why is this useful?

Swap doesn't get zeroed out, and there's no way to guarantee when it will be overwritten, so you don't know how long any persisted data will last. While encrypting your swap is a good idea, you can't expect everyone to encrypt their swap partitions.

Any caveats?

Hibernating will still persist passwords to swap. Sorry.

Solaris and Solaris-based operating systems require that you grant the proc_lock_memory privilege before a user can run anything that calls mlock.

Memory that you mlock must be page-aligned. I mmap my sensitive memory to ensure this.

2. mprotect your pages

mprotect assigns permissions to your pages. I typically write my sensitive data and then immediately mprotect the page so that it's read-only. If anything then tries to write to or execute that memory, it triggers a segfault, which is much better than letting the sensitive memory be tampered with.

Why is this useful?

Instead of a rogue buffer overrun silently wiping your sensitive data, you trigger a segfault. This helps prevent tampering with your sensitive data, whether malicious or accidental.

Any caveats?

Again, the memory that you pass into mprotect must be page-aligned.

3. Encrypt your sensitive strings

It's really obfuscation rather than real security, but it's a worthwhile tactic: passwords become less trivially identifiable if they're encrypted in memory.

Why is this important?

If an attacker gets a memory dump, or if a machine hibernated and an attacker has your swap device, it makes it far harder to identify a password.

Any caveats?

Hell yes.

The key needs to be stored in memory. If the attacker knows where in memory your key is, then it's trivial to decrypt your sensitive string. It really just ends up being obfuscation more than anything.

4. memset_s your pages before you munmap them

memset_s is a new C11 function that will memset with the guarantee that it won't be optimized out.

If the value that you memset isn't used after the memset, smart compilers will optimize it away. If you're clearing sensitive memory after you're done using it, this is definitely not what you want.

Why is this important?

When you free or munmap memory, it isn't wiped before being given back to the OS. If an attacker gets a memory dump after you have freed a sensitive block of information, it's very possible that the value is still in memory.

memset isn't sufficient for the reason I outlined above.

Any caveats?

memset_s isn't implemented in many libcs. If that's the case for your targets, check out this implementation from CERT.

Now where the hell does Go come into all of this?

Since I'm working with passwords in Go, I felt the need to write a secure password implementation in Go. In it, I did all of these things, save for clearing my data with memset_s. As far as I'm aware, Go's compiler won't optimize away a normal memset, so I'm safe there.

The code for the secstring package is on GitHub. The documentation is on GoDoc.

I welcome any bugs/criticisms/questions/audits.

Week 4: 17 February 2014

[OpenMRS Developers Setup]

From the get-go I was having installation trouble with the SDK. I followed the instructions located here:

I first tried to get it successfully installed on my laptop to no avail. I installed Java JDK fine along with the JRE. I installed the OpenMRS installer for Windows version 1.0.6 fine. Below is the code snippet from my cmd window:


OMRS Version: "1.0.5"
OMRS Home: C:\Program Files (x86)\omrssdk-1.0.6
OMRS Scripts: C:\Program Files (x86)\omrssdk-1.0.6\bin
OMRS Maven Home: C:\Program Files (x86)\omrssdk-1.0.6\apache-maven
Executing: "C:\Program Files (x86)\omrssdk-1.0.6\apache-maven\bin\mvn.bat" --ver
Apache Maven 3.1.0 (893ca28a1da9d5f51ac03827af98bb730128f9f2; 2013-06-27 22:15:3
Maven home: C:\Program Files (x86)\omrssdk-1.0.6\apache-maven\bin\..
Java version: 1.7.0_51, vendor: Oracle Corporation
Java home: C:\Program Files (x86)\Java\jdk1.7.0_51\jre
Default locale: en_US, platform encoding: Cp1252
OS name: "windows 8", version: "6.2", arch: "x86", family: "windows"


After I knew I had the correct JDK and OMRS versions installed, I tried to create the module. It went through its file downloads and came up with no errors after I left all the default values. No matter what I tried at this point, I could not get the "omrs-run" command to see the configuration. I was sure I had forked the correct project on GitHub, and I verified with other students, but we could not figure out why it was not working.

Duplicating the whole process on my desktop yielded the same results. I will have to investigate further as to why the module/configuration is not being seen, and will update with further info.



Week 3: 10 February 2014

[Wiki Editing]

This week’s learning about editing the wiki was actually very helpful and easy to pick up. I can see why wikis are becoming more and more common for compiling information or for use as knowledge bases. I began by going to the CS wiki and logging into my account. I wrote a brief description about myself and then added a link to this page under the Students page. Editing a wiki was incredibly easy and straightforward, and I had no problem with it.

The Issue Tracker Activity was also very informative on how to use formatting and editing for a wiki. As I proceeded through the activity, I learned from the few mistakes I made in formatting so my responses were easier to read, but in the end I got the hang of it.


[Issue Tracker Activity/Three Issues]

The info on the OpenMRS Issue Tracker was also sorted very well, making it easy to read and follow how tickets and bugs were logged. The filters make sense for how one would want to search for a particular type of issue. I was able to easily find all of the necessary field and type information, but at first I had trouble finding the summary of the project. I was looking for a description or paragraph summary and did not realize it was right on the front page, shown with data.

After perusing the list of all the tickets at a glance, I ended up choosing three according to my comfort level (my programming level as well as my overall knowledge of the project so far). I couldn’t think of ways the system might be improved upon, though there certainly are some. To me, as an outsider to the core of the project, it seemed to work very well for giving someone cursory information.


[Git Videos and Tutorials]

Following the Git tutorial was very useful and gave me a tool that I can come back to in case I need to remember certain commands without having to hunt for example uses. I completed it in about 10-15 minutes, but I still had some general confusion about the syntax of some of the commands (I do not feel at all comfortable operating in a Linux/Unix environment, even after my classes here at WSU). As they came up in the example terminal, I did look some of them up, but did not come up with anything really concrete as a definition or further examples of how to use them.

The Git videos were also a good reference to follow along with, watching on my desktop while working on my laptop. I had never used GitHub before this class (unfortunate that we are learning it so late), so a further explanation with maps and diagrams was helpful. One thing I did notice, though, was that even though these were tutorial videos on using Git and what it is, they were not as detailed as I would have wanted for some areas.

Mozilla at Southern California Linux Expo

From Wednesday through Sunday, I was in Los Angeles for the Southern California Linux Expo (Scale12x), held at the Hilton LAX. This was my first Scale12x, and this was the second year that Mozilla had an official presence.

To be honest, I’m not totally sure what I was expecting when going to Scale12x. I thought it was going to be a typical regional Linux expo, but Scale12x was a larger event than I could have imagined.

Thursday, I spent a lot of time meeting with Firefox Users and people from other open source projects. In the evening, I joined Brandon Burton (Mozilla), Chris Turra (Mozilla), Jordan Sissel (Elasticsearch) and Michael Stahnke (PuppetLabs) for a DevOps Dinner.

On Friday, I took the opportunity to visit a number of talks including Brandon Burton’s lightning talk during the DevOps track.  I met up with Casey Becking, a fellow Mozilla Rep, and we hung out, visited some talks, and checked out the expo floor in advance of setup.

That evening, Casey Becking, Joanna Mazgaj and I went out for a Mozilla group dinner. We went to a place called Akbar in Marina Del Rey. The dinner was excellent, and it was great to get together and spend time with fellow Reps before the expo floor opened.

Saturday and Sunday were the days of the expo hall, and I woke up about 6:00am to head down and set up our booth early. By 9:00am we were starting to see the first attendees trickle in. I would estimate that in the first few hours we saw close to five hundred attendees, and by the end of the day closer to a thousand.

At one point, a group of students from L.A.’s Roosevelt High School stopped by, and I gave them a short overview of Mozilla OpenBadges and discussed how their school could use the platform to recognize students for various achievements. The faculty who accompanied the students said they were very interested in the program.

There was a tremendous amount of interest surrounding Firefox OS. I would say 90% or more of the attendees who visited the booth demoed Firefox OS or asked questions. Many asked us when they could buy a device in North America.

To my knowledge we were only asked about Directory Tiles on two occasions. Both of those conversations were positive once we explained it some and pointed out Mitchell’s blog post.

Sunday was a bit slower, since most attendees had visited the booth, although some people who arrived on the last day of the expo did stop by, and some attendees visited us again. All in all, the event was very positive, and a lot of buzz was generated, in addition to getting information out there about how to contribute to Firefox OS, where a device can be purchased, and the progress of the project.


Google Summer of Code 2014 is nearly here – are you ready?

2014 marks the 10th anniversary of the Google Summer of Code, a competition that brings students and open source communities together through summer coding projects.

Google Summer of Code 2014 logo

Each year the competition gathers ideas for contributions from a wide range of projects and foundations, and invites proposals from students for tackling them. As well as the experience of working on major projects with expert mentors, students are also paid a stipend of $5,500 for their effort.

With the date for accepting proposals only a few weeks away (10th-21st March), it’s time to get a move on!

Personally I think GSoC is an excellent opportunity for students to develop real-world development and project participation skills and make connections that will be useful after they graduate, and I’m always surprised how few students from the UK apply each year.

If you are a lecturer at a university, it’s a good time to raise awareness of the competition with your students. You can download a flyer here, for example.

Time too tight? Well, after the Google Summer of Code, there is also the VALS Semester of Code in the Spring. 

Week 2: 3 February 2014


The IRC activity that we did in class was awkward, to say the least. For starters, before going into further detail, I did not know people still used IRC in today’s age of the internet. It seems extremely dated and suited only to the “hardcore” or old-school crowd who are stuck in the old ways. After downloading a few clients (experimenting with some proved difficult, and others were hard to navigate), I landed on one that was recommended to me by Brian Gibson — HexChat. After setting up the initial server and identity for myself, I connected pretty easily.

The whole conversation that took place afterwards was done silently inside IRC. It gave me a first glimpse of how to use it and how to state my thoughts clearly in an online forum. The lack of speaking in class was a little weird, but the exercise was meant to show us how to use IRC in case we could not meet for class. All in all, I think it was valuable, even though the method of holding online meetings felt a little dated.



The readings for this week were a good statement to me personally about how I could contribute more to the project. I do not believe I have anything much higher than an average programming level, nor do I have extensive knowledge of languages and tools other than what I’ve used during my major at WSU, so joining this open source project seemed a little daunting. I did like seeing that there were tons of other ways to contribute (I’m more keen on the documentation and bug tracking parts). Some of the examples, I could tell, were a little “out there” in terms of how they would actually help, but I can see the end result of it all.

Reading about bug-tracking methods and etiquette was not 100% new information, since I have worked in an environment where bugs came in from clients and we helped resolve them. From a programming perspective, though, it applies in almost the same way. I think the OpenMRS bug tracker is well developed (it has to be at the scale it operates at this point) and should be easy to pick up as bug tracking becomes part of the assignment.

On Community Sentiment

The LoCo Council released an interim report on their census of Ubuntu LoCo Teams, and the results were consistent with what I found when I did a census of unapproved LoCo Teams in the United States in 2012. In that health check, I found that most LoCos were either unresponsive or had less than positive sentiments about the direction of the project, and some had chosen to let their approval lapse on purpose.

It was a bit discouraging back in 2012, and I remember mentioning it at the time to the Canonical Community Team and suggesting that it would be nice to see more community building in general and more resources for LoCos. Here we are in 2014 and the pulse of the community has not changed. In fact, there are fewer approved LoCos today than there were in 2012, although on the positive side there are now more resources for the community than there were then.

I think 2014 could be a big year for the Ubuntu Community to make plans on reviving LoCos in the United States and elsewhere. The LoCo Council is helping out by giving us a good measurement of how things stand. But, once we know what the pulse is like, how do we address the state of things? I think it might be good for the upcoming UDS for someone in the community to own a session on brainstorming ideas to reawaken local communities in the United States and elsewhere and then plan to start putting those ideas to work this year and next.

I believe Ubuntu Local Communities can be vibrant again and there are lots of shining examples of local communities that are still very healthy like Ubuntu California, Ubuntu Peru and Ubuntu Italy just to name a few.

I think we need community contributors to feel excited about Ubuntu again and feel like real stakeholders in the project and I think we need to care strongly about community sentiment. We need to bring back the culture where community was one of the most important aspects of Ubuntu.


“I still consider myself an Ubuntu Community Member, but I don’t think there’s any community left.” – Paul Tagliamonte, Former LoCo Council Member


OSS Watch publishes National Software Survey 2013

OSS Watch, supported by Jisc, has conducted the National Software Survey roughly every two years since 2003. The survey studies the status of open and closed source software in both Further Education (FE) and Higher Education (HE) institutions in the UK. OSS Watch is a non-advocacy information service covering free and open source software; we do not necessarily advocate its adoption. We do, however, advocate the consideration of all viable software solutions – free or constrained, open or closed – as the best means of achieving value for money during procurement.

Throughout this report the term “open source” is used for brevity’s sake to indicate both free software and open source software.

Summary of National Software Survey findings (the findings can be found in full in the report linked below)

Looking back over 10 years of surveys, we can see how open source has grown in terms of its impact on ICT in the HE and FE sectors. For example, when we first ran our survey in 2003, the term “open source” was to be found in only 30% of ICT policies – and in some of those it was because open source software was prohibited! In our 2013 survey we now find open source considered as an option in the majority of institutions.

Open source software has also grown as an option for procurement; while only a small number of institutions use mostly open source software, all institutions now report they use a mix of open source and closed source.

However, the picture is not all positive for open source advocates, and we’ve noticed the differences between HE and FE becoming more pronounced.

You can read the full report online, or download the PDF from the OSS Watch website.

Will Firefox Really Have Ads?

There has been a lot of sensational writing by a number of media outlets over the last 24 hours in reaction to a post by Darren Herman, Mozilla's VP of Content Services. Lots of people have been asking me whether there will be ads in Firefox and pointing to these articles that have sprung up everywhere.

So first I want to look at the Merriam-Webster definition of an advertisement:


noun \ˌad-vər-ˈtīz-mənt; əd-ˈvər-təz-mənt, -tə-smənt\

: something (such as a short film or a written notice) that is shown or presented to the public to help sell a product or to make an announcement

: a person or thing that shows how good or effective something is

: the act or process of advertising

Great, now that we have the definition, it looks like the mere fact that Mozilla announces users' rights to them and asks about their data choices meets the criteria of an advertisement. The question is: does the average user consider that an advertisement, or a useful bit of content that Mozilla is trying to share? Next, let's look at the fact that Firefox uses Google as its default search engine and home page. Boom: if we use the definition literally, that too is an advertisement, isn't it?

So now let's move on to Darren's post. While I think the post could have had stronger context, and there could have been a better response to address some of these concerns, the basic gist is that Mozilla plans to offer tiles where there was a gap in content, and that some of those tiles may be sponsored but will still be consistent with Mozilla's values. Furthermore, this new content will only be displayed to new users, and it's unlikely you will see it anytime soon, since it has not landed in Mozilla-Central or gone through the processes necessary to make it into a stable release.

Personally I think this is much ado over nothing and I think this feature and features like UP (User Personalization) are going to be very helpful to users and bake in some of the content that add-ons have typically provided.
Update: Mitchell Baker (Chief Lizard Wrangler) and Denelle Dixon-Thayer (Mozilla General Counsel) have both posted on this topic. Please feel free to read their posts.

Test Driven Learning: setting learning goals for yourself, Software Engineering edition

Stacey asked me for a refresher on Test Driven Learning for Hacker School, so here we go.

Test Driven Learning is a software engineer's articulation of Wiggins & McTighe's Understanding by Design framework, strongly influenced by Ruth Streveler's "Curriculum, Assessment, and Pedagogy" course at Purdue.

Many software engineers are familiar with the process of Test Driven Development (TDD).

  1. Decide on the goal.
  2. Write the test (“how will you know if it’s working, exactly?”)
  3. Make the code pass the test.
  4. Celebrate.
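To make the parallel concrete, here is a minimal sketch of that cycle in Python with the built-in unittest module (the `slugify` function and its test are hypothetical names invented for this illustration, not anything from a real project):

```python
import unittest

# Step 1: decide on the goal -- a function that turns a post title into a URL slug.

# Step 2: write the test first ("how will you know if it's working, exactly?").
# It fails until the code below exists and behaves correctly.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Teaching Open Source"), "teaching-open-source")

# Step 3: write just enough code to make the test pass.
def slugify(title):
    return "-".join(title.lower().split())

# Step 4: celebrate -- running `python -m unittest` on this file reports OK.
```

The point is the ordering: the test exists, and fails, before the implementation does.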


Test Driven Learning (TDL) simply says “it’s the same thing… for your brainnnnn!”

  1. Decide on the goal (“learning objective”).
  2. Design the assessment (“how will you know if you’ve learned it, exactly?”)
  3. Go through the experiences/etc. you need to pass your assessment.
  4. Celebrate.


That’s it. Really.

Step 2 is the part most people flub. With software tests, you have a compiler/interpreter forcing you to be precise. With learning assessments, you don't, but you need exactly the same level of precision, checkable from the outside. If you asked a group of external people (with appropriate expertise) whether you'd passed the assessment you set for yourself, there should be no disagreement. If there's disagreement, your assessment needs a redesign.

A good assessment is a goal that helps you stretch to reach it; sometimes it encourages you to do more. But sometimes it also gives you permission to stop: you've written the code, you've delivered the talk, they met the criteria you set, and now you're done. You can absolutely set up a new goal and keep on learning. However, you're no longer allowed to say you Haven't Learned X, because you've just proven that you have.

Here are some rough-draft quality TDL assessments you might start with, and a bit of how you might improve them.

I will learn Python. (What does that even mean? How will you know you’ve learned it?) I will complete and pass any 50 CodingBat exercises in Python. (But I could do that by solving 50 really easy problems.) Only 10 of those 50 problems can be warm-ups, and at least 20 of them must be Medium difficulty or greater. (Does it matter if you get help with the problems?) Nope, I can get as much help as I want from anyone, as long as I could explain the final solution to another programmer.

I will get better at testing. (What do you mean by “testing”?) I write a lot of code, but I’ve never written tests for any of it. I hear the nose framework is nice. (What do you mean by “better”?) Well, I’ve never written a test at all, so even going from 0 to 1 would be an improvement. I could use nose to write tests for 3 different pieces of working code I’ve already written. (Do these need to be big or exhaustive tests?) Nope, I’m just trying to learn what writing tests is like, not get full test coverage on my code… at least not yet. Even if I write a 3-line test that checks out one minor function, it counts as one of the 3 tests. (What does it mean for a test to be “done”?) When someone else can check out and successfully run my code and my test suite on their computer without needing to modify either bit of code, it’s done.
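For readers who, like the author above, have never written a test: nose (like its successor pytest) discovers plain functions whose names start with `test_` and treats a bare `assert` as the check, so a "3-line test that checks out one minor function" really is this small. A minimal sketch, with an invented `word_count` function standing in for "working code I've already written":

```python
# A small piece of already-working code (a hypothetical example for this sketch).
def word_count(text):
    return len(text.split())

# nose (and pytest) auto-discover functions named test_*;
# a bare assert is all a minimal test needs.
def test_word_count_simple_sentence():
    assert word_count("free and open source") == 4

def test_word_count_empty_string():
    assert word_count("") == 0
```

Running `nosetests` (or `pytest`) in the code's directory finds and runs both tests, which satisfies the "someone else can check out and run my test suite" definition of done.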

I will understand how databases work. (By “understand,” do you mean the mathematical theory behind their design? Or how to actually implement and use one?) Oh geez, the latter. I don’t care about the math so long as I know how to interface with a database. Any sort of database. (So you need to make a demo.) Yeah, but that’s not enough; I can blindly type in code from a tutorial, but that doesn’t mean I’d be able to field questions on it. (What could you do about that?) I will give a presentation to fellow Hacker Schoolers demonstrating a small database interaction in code I have written. That’s an easy binary to check; either I’ve given the presentation or I haven’t.
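As a rough idea of scale, the "small database interaction" described in that assessment can be just a few lines. This sketch uses Python's built-in sqlite3 module with an invented table; it is only an illustration of the kind of demo the assessment asks for:

```python
import sqlite3

# An in-memory database; the table and rows are invented for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE courses (name TEXT, format TEXT)")
conn.executemany(
    "INSERT INTO courses VALUES (?, ?)",
    [("Intro to Python", "MOOC"), ("Databases 101", "SOOC")],
)
conn.commit()

# Query it back out: the interaction a short demo presentation could walk through.
moocs = conn.execute(
    "SELECT name FROM courses WHERE format = ?", ("MOOC",)
).fetchall()
conn.close()
```

Being able to field questions on each line of something like this, rather than just typing it from a tutorial, is exactly the bar the presentation-based assessment sets.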

Thoughts, questions, ideas? Got your own example TDL assessment (at any stage of revision), or ways to improve the ones above? Holler in the comments.

The Day We Fight Back!

Tomorrow is The Day We Fight Back, and it's good to know Mozilla is lending its name to support this day of action, which focuses on restoring privacy and ending spying by intelligence agencies. You can sign up and add the code, or if you use WordPress, install the plugin and help support TDWFB!



Attending Scale12x

Where are you going to be next week?

I know where I will be. I will be attending Scale12x (the Southern California Linux Expo) in Los Angeles, CA. I will be there most of the week working with Casey and Joanna to staff the Mozilla booth, where we will be evangelizing Firefox OS. This is going to be Mozilla's second year at the Southern California Linux Expo, and I have to say we're really excited to be there again after lots of interest in Firefox OS and other Mozilla projects last year.

I’m also really looking forward to checking out some of the talks by Community Contributors at the UbuCon which is taking place at Scale12x. If you are attending Scale12x and want to catch up for a beer or lunch/dinner hit me up on Twitter and we will set something up.

Also be sure to check out Where’s Mozilla to learn about other events Mozilla will have a presence at!

Is license compatibility worth worrying about?

At FOSDEM last weekend I saw an excellent talk by Richard Fontana entitled Taking License Compatibility Semi-Seriously. The talk looked at the origins of compatibility issues between free and open source software licences, how efforts have been made to either address them directly or dodge around them, and asked whether it's worth worrying about them in the first place. This post will summarise the talk and delve into some of the points I found most interesting.

The idea of FOSS license compatibility isn’t one that was created alongside the FOSS movements, but rather one that came about when projects started to combine code released under different licences, particularly copyleft and non-copyleft licenses.  As such, there’s no real definition of what license compatibility means, and so people tend to defer to received doctrine (such as the FSF’s list of GPL compatible licenses), or leave it up to lawyers to sort out.

Early versions of KDE and Qt created the most significant license compatibility issue in the FOSS world. Qt's original proprietary license, and later the QPL under which it was relicensed, were considered incompatible with the GPLv2 under which the KDE project (or at least parts of it) was licensed. Qt is now dual-licensed under the LGPL or a commercial proprietary license, which fixes this incompatibility, but the FSF also suggest a remedy whereby a specific exception is added to the QPL allowing differently-licensed software to be treated under the terms of the GPL.

Another common incompatibility issue with FOSS licenses has arisen where projects have wanted to combine GPLv2 code with ASLv2 code. The FSF consider the patent termination and indemnification provisions in ASLv2 to make it incompatible with GPLv2; however, they believed these provisions to be a good thing, so they ensured that GPLv3 was compatible with it. Indeed, GPLv3 went on to codify what it meant for another license to be compatible with it.

While this means at first glance that only code explicitly licensed as GPLv3 and ASLv2 can be used together, while GPLv2 and ASLv2 cannot, this isn't necessarily the case. The FSF encouraged projects to license their code "GPLv2 or later", in the hope that when future versions of the license were released, projects would be encouraged to transition to the new license and in doing so benefit from features such as ASLv2 compatibility. However, this method of licensing can be interpreted as "GPLv2 with the option to treat it as GPLv3 instead", meaning that for the purposes of compatibility it can be treated as GPLv3, while remaining "GPLv2 or later".

This has the opposite effect of the FSF’s intention by encouraging projects to remain “GPLv2 or later” for the added flexibility it provides while avoiding forcing licensees to be bound by parts of GPLv3 that either party may not like.

While the above trick won't work for code licensed "GPLv2 only", a similar thing is possible for code licensed "LGPLv2 only". As LGPLv2 is intended for library code, it contains a clause allowing you to relicense the code under GPLv2 or any later version, in case you want to include it in non-library software. This means that you could, for the purposes of compatibility, treat the code as GPLv3. The Artistic License 2.0 and the EUPL contain similar relicensing clauses.
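The relicensing paths described above can be sketched as a toy lookup table. This is only a model of the handful of rules mentioned in this post, not a general compatibility checker and certainly not legal advice; the license labels are invented shorthand:

```python
# Which licenses a work may be *treated as*, per the relicensing
# clauses discussed above (a toy model of only this post's examples).
TREAT_AS = {
    "GPLv2-only":     {"GPLv2-only"},
    "GPLv2-or-later": {"GPLv2-only", "GPLv3"},   # licensee may opt up to v3
    "GPLv3":          {"GPLv3"},
    "LGPLv2-only":    {"LGPLv2-only", "GPLv2-only", "GPLv3"},  # library relicense clause
    "ASLv2":          {"ASLv2"},
}

# The FSF doctrine referenced above: ASLv2 combines with GPLv3, not GPLv2.
COMPATIBLE_PAIRS = {frozenset({"GPLv3", "ASLv2"})}

def can_combine(lic_a, lic_b):
    """True if some treatment of each license lands on a compatible combination."""
    for a in TREAT_AS[lic_a]:
        for b in TREAT_AS[lic_b]:
            if a == b or frozenset({a, b}) in COMPATIBLE_PAIRS:
                return True
    return False
```

Under this toy model, "GPLv2 or later" code combines with ASLv2 code precisely because it may be treated as GPLv3, while "GPLv2 only" code cannot, which is the loophole the paragraphs above describe.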

What all of this shows us is that while it’s a complex issue, it’s a somewhat artificial one, and there’s all sorts of tricks one can use to circumvent it.  In practice, these compatibility “rules” are rarely followed, and rarely enforced.

In response to this, Richard Fontana suggests that we borrow the idea of “duck typing” from programming to make our lives easier.  If a FOSS project wants to combine some code under the GPL with code under a more permissive, possibly incompatible license, as long as they’re willing to follow the convention of distributing the source as though it was all GPL, the community still gets the benefit without the additional headache of worrying over which bits are allowed to be combined with which.
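For readers unfamiliar with the programming idea being borrowed: duck typing means code never checks what type an object declares, only whether it supports the operations asked of it. A minimal Python illustration (the classes here are invented for the analogy):

```python
class Duck:
    def quack(self):
        return "quack"

class Person:
    def quack(self):          # not a Duck, but it quacks like one
        return "I'm quacking"

def make_it_quack(thing):
    # No isinstance() check: having a working quack() is enough -- just as,
    # in Fontana's analogy, distributing the source as though it were all
    # GPL is enough, without auditing which bits are "really" GPL.
    return thing.quack()
```

The licensing parallel is that the community cares about the observable behaviour (full corresponding source, offered under GPL terms), not the declared pedigree of each file.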

Are MOOCs Still Going Strong?

Katie Lepi wrote about the advantages, disadvantages, and statistics of MOOCs (Massively Open Online Courses) on the website at

The infographics gave a very pictorial representation of these advantages, disadvantages, and statistics.

CS 401 – Software Development Process

This is my first blog post ever. Huzzah.

[What do I expect?]

Entering my last semester at WSU, I expected my capstone class to be heavily interwoven with the knowledge gained from the other CS courses up to this point. Using all the skills and techniques of software design and analysis, plus my average coding ability, I believe it is going to be a hard but valuable 16 weeks. I think it's taken too long for us to learn about GitHub, something that should be taught very early on so that we become masters at using it as the years build up. It's unfortunate that I am only just beginning to learn one of the biggest platforms for version control as I head out into the real world in a couple of months. The same goes for IRC. Having heard about and generally known what IRC is for years, I've never had a reason to use it, so I never bothered with it. Not having the class time to go over it was a shame, but I think I'll be able to figure it out.


After reading through the articles, they don't exactly add anything I didn't already have a grasp on as far as the ideas behind free and open source software go. The quote from The Cathedral and the Bazaar, “Too often software developers spend their days grinding away for pay at programs they neither need nor love”, pretty much describes how I feel about a future of software programming.

Free software, as defined by the other articles, was pretty strict in what it declared not free, but I guess it has to be in order to promote the truest form of open source software. I think overall it's a nice premise and a cause worth promoting, but in the end it just results in things like where we are with Linux (dozens of distros, for instance). Open source projects should be used more for promoting concepts and ideas, as ways to teach people to code better. Grabbing source code to investigate what certain things do and see the inside of the program is a good learning tool, but having everyone develop and tweak each aspect of it and release it to the public convolutes everything.

[Git activity]

The entire GitHub activity was confusing and too "hardcore" from what I experienced. The whole manual process done via the command line was accomplished in a matter of a few minutes with the GUI version. I think in this day and age, saying you "aren't a real programmer if you use a GUI/mouse" is too harsh a constraint for people to try to abide by in this field. It's 2014. There are UIs to make people's lives easier and actions simpler, and we should use them. I was able to get up and running using the official GitHub GUI install (not the third-party site download), and I think it represents the workflow pretty well.

[IRC activity]

I have not done anything other than download an IRC client (HydraIRC), so I am not sure how much I can comment on this. Hopefully we can use it more in depth in the coming class meetings.

Are Coursera’s Online Specialization Certificates Worth the Cost?

Jamie Littlefield has written an article on "Are Coursera’s Online Specialization Certificates Worth the Cost?" at this website:

1.  Coursera is now offering online “specializations”.  These are certificates from participating colleges that students can use to demonstrate completion of a series of classes.

2.  Coursera is known for offering many online free-to-the-public courses from colleges and organizations.

3.  Now, students can enroll in courses, pay a tuition fee, and earn a specialization certificate.

4.  Certificate options are continuing to grow and include topics such as:
“Data Science” from Johns Hopkins University,
“Modern Musician” from Berklee, and
“Fundamentals of Computing” from Rice University.

5.  In order to earn a certificate, students take a series of courses and follow a set track in each course.

6.  At the end of the series, students prove their knowledge by completing a capstone project.

Is the cost worth the certification for these new Coursera programs? Jamie Littlefield listed some of the pros and cons:

Pros:

  1. Specializations allow learners to prove their knowledge to employers.
  2. New certificates look good in a portfolio.
  3. Specializations cost a lot less than college programs.
  4. Students earn certificates by demonstrating their knowledge.
  5. Pay-as-you-go options and financial aid are available.
  6. There's huge potential for program development.

Cons:

  1. Specializations are untested.
  2. Specializations are unlikely to be honored by colleges.
  3. Students have many other no-cost MOOC options that may be just as good.
  4. These certificates may be less valuable compared with other non-accredited training.
Check out the programs here: Coursera Online Certificates

Jan 27 Class Work and Reading

Well, the classwork from last time was certainly more than the usual busy-work that most first classes tend to assign. Granted, given how rarely this class is planned to meet, that's understandable. All in all, it's a shame that we couldn't finish everything during the class time, as I'm still not completely sure how the Git and IRC pieces are going to work. I'm pretty sure I either missed something or got lost in the Git activity: while I could make it work in the online (GUI?) interface, I never got it downloaded to the program on my laptop. I was planning on going through everything again on my own time to find what I missed, but I never saw the activity document posted. The fact that we never got to the IRC part didn't help my understanding of that, either.

The readings were interesting, if almost predictable, in content. I am fully aware that I can tend to overlook things, and it's a great help to have more people pointing out problems and solutions to you (The Cathedral and the Bazaar). Heck, even outside of programming, this is a proven concept. I can't count the number of times the attendant on the shift before mine at my part-time job could not for the life of them find what was making their drawer off, only to have me (or the next person coming in after me, if I got the first shift) take a look and point out something misread, misused, or entirely unaccounted for. It was certainly nice to see how such things began evolving and were taken to the extreme in software development. It's long been known that, where the Internet is concerned, there is little to no privacy and information gets hacked all the time, but to see a community turn that around and say, 'hey, rather than going to look for information to rip out and use, why don't we all just open it up and work on it together?', to the point that innovation happens so quickly and so often... it really is a whole new age that this sort of idea has brought about.

On the second reading, Free vs. Open: to be honest, it's really not the type of argument that I generally get into. From my own understanding, the main difference between the two terms stems from a difference in understanding their intended meanings. There have been numerous times in my life when I have said or done something and had my meaning misunderstood. To me, misunderstandings happen all the time, and they are why things should be taken with a grain of salt. From my reading, the biggest differences between the two terms are the ideas and philosophies behind them. Sure, there are some issues about 'open' software being restricting in the sense that you can't freely modify, use, and redistribute it, but that seems to be a minor justification for the argument that was presented. The real argument is about the 'freedom' aspect of both, where people say that 'open' software isn't truly 'free' software; and maybe in some cases they're actually right, but from what I understood of what I read, they're also complaining about 'free' software that is labeled as 'open' and claiming it shouldn't be called that, even though the software in question still meets the qualifications of 'open' software. I'm the type who cares more about results than 'spirit', as it were, so the whole thing starts to come off as pointless squabbling when there are other, better things to be doing with one's time and energy.

As for our final reading work, the OpenMRS Developers Guide, well, it kinda reads like a brochure, but it’s nice to know just what we’re going to be working on. Having some background and information on just what you are helping to accomplish can be a rather great motivator. Now let’s see if we can’t start making differences in the world at large while getting some real life, practical experience in there as well.

2014 e-Learning, Open Education, and Distance Education Conferences

2014 e-Learning, Open Education, and Distance Education Conferences
by Jamie Littlefield

The following are 8 e-learning conferences in 2014 that have been highlighted by Jamie Littlefield.

1.  ISTE Conference
ISTE - International Society for Technology in Education
Next Conference: June 28-July 1, 2014 in Atlanta

2.  Educause
Next Conference: September 29–October 2, 2014 in Orlando, Florida

3.  Learning and the Brain
Next conference:  not given

4.  Devlearn
Next Conference: October 29-31, 2014, Las Vegas

5. Open Education for a Multicultural World
Next Conference: April 23-25, 2014 in Ljubljana, Slovenia

6.  Learning Solutions Conference
Next Conference: March 19-21, 2014 in Orlando

7.  Ed Media
Next Conference: June 23-27 in Tampere, Finland

8.  Sloan-C Conferences
Next Conference: varies

What I’m Expecting..?

Well, this is probably shaping up to be the computer science class most involved with things outside the classroom. I never really had a blog or anything of that nature before starting this class, so that's probably going to cause me some headaches.

What am I expecting out of the course itself..? Well, I generally try to go into things with an open outlook, so I can’t really say I’m “expecting” anything. We’ll just have to see how things go. There seems to be a good amount of expected reading to start with, at least.

What I expect to take away from the course..? Probably something along the lines of ‘a better understanding of the greater world of computer science and how programmers and developers get in touch and interact with one another in an open environment’. Well, something like that. I never claim to be the best at putting thoughts into words.

All in all, it’s shaping up to be an interesting experience, all things considered. Let’s see what we can learn…

Online university courses: godsend or gimmick?

Online university courses are very much in the limelight nowadays, especially those that are called MOOCs (Massively Open and Online Courses).

Check up the web page at this address:

The New York Times declared 2012 the year of the MOOC. At the same time, the UK's Open University announced a partnership with 29 British universities to venture into this new area.

But one year later, many people are starting to have their doubts about MOOCs.

For example, in Britain, only 8% of the people surveyed had even heard of MOOCs, according to a Guardian and Open University study.

Rebecca Ratcliffe wrote about "Online university courses: godsend or gimmick?". It is part of a series of articles looking at how online courses are meeting (or falling short of) public expectations.

YouTube video on "What is a MOOC?"

There is this short and interesting video clip which explains what a MOOC is all about.

It is presented by Dave Cormier and it is found at:

Here are some summary points from this video clip:

  1. MOOC - Massive Open Online Course
  2. It is a response to information overload.
  3. Information is everywhere.
  4. Course is participatory.
  5. It is distributed in that the information is found everywhere.
  6. It supports lifelong networked learning.
  7. MOOC is not a school.
  8. MOOC allows the participant to connect and collaborate.
  9. It is all about engaged learning.
  10. Participants need not pay to participate in the course.
  11. However, participants may need to pay for some accreditation.
  12. Participants are not expected to submit assignments.
  13. It is all about networked connections.
  14. There is no right way to learn.
  15. MOOC helps to generate new ideas.
  16. MOOC helps to build a distributed knowledge base.
  17. MOOC helps participants build independence, work in their own space, and form authentic networks that last after the course finishes.

Support New Features in Thunderbird

A regular user of Thunderbird has been advocating for a feature in Thunderbird or Enigmail: the ability to decrypt PGP-encrypted messages and save them locally in an unencrypted state. The user, who feels this feature will enhance the usability of encryption in Thunderbird, reached out to the Thunderbird team, and I suggested he go the route of crowdfunding the feature.

Thunderbird is the premier open source mail client, used on everything from Windows to Ubuntu, made by Mozilla, and supported by the Mozilla community.


Support the crowdfunding campaign here by pitching in or taking the request and making it a reality:

Crowdfunding for Mozilla Thunderbird Bug #280588

Crowdfunding for Enigmail Bug #1


Happy International Privacy Day and be sure to follow Thunderbird on Facebook, Google+ and Twitter!

Happy International Privacy Day!


Who sees what you search for?

Privacy is an important issue that every user should strive to protect. While privacy means many different things to different people, at its core it is about users being given real choices in regard to how their information (searches, personal details, etc.) is shared and handled.

I would like to highlight two contributions I believe I have made in regard to privacy: advocating for Firefox to remain the default web browser on Ubuntu, and alerting the FSF and EFF to privacy issues in the new Unity Lenses & Scopes. Both organizations were unaware of the new feature, but they agreed with my privacy concerns and helped campaign for changes.

Sadly, in the case of the Unity Lenses & Scopes, the privacy issues remain because users are not given choice or control by default; instead, the decision is made for them. I still hope that Canonical (which last year won an Anti-Privacy Award for this privacy failure) will make the right decision for its users.

I think it is important for contributors and users in any open source project to always stand up for values like privacy. I hope more will do this whenever the occasion presents itself in any open source project. Privacy and User Choice are pillars of the open source culture and we should always strive to do our best in respecting both.

Be sure to find excellent posts on privacy and share them on social media today using the hashtag #PrivacyDay


Jeff Osier-Mixon (Jefro)

This blog is not updated very often

As one can tell from the dates, this blog is not entirely abandoned, but somewhat behind the times. If I have an excuse, it is that I am going to so many conferences and spending all my time working on the . Please feel free to contact me in person for anything related to the Yocto Project, OpenEmbedded, or embedded Linux in general. (You can find my email address or Google+ persona with a quick search.)

Tips for webinars or virtual training

Webinars are conducted by many people everywhere. The difficult part, however, is engaging listeners effectively; otherwise they will switch off during a long presentation.

Cathy Moore has written a short article that gives some tips on conducting webinars that engage listeners.

Useful tips:

1.   Include many thought-provoking questions for people to answer in the chat. These should be open-ended questions rather than polls or multiple-choice questions. They’re more like, “Here’s a problem. How do you think we should solve it?” or “Here’s a draft of a solution. What’s wrong with it?”

2.   Talk less and listen more to your audience.

3.   Read the chat and reply to questions asked there. Repeat each question aloud for the rest of the session, since other listeners might not have seen it.

4.   Avoid headaches and reduce development time by creating a presentation without animations or transitions; PDF is usually safe.

5.   Use a headset with a decent microphone, not the computer’s default mic.

6.   Limit sessions to 90 minutes at most.

7.   If you’re providing a handout, make it useful, not just a copy of the PowerPoint slides. You might create a handout that includes the main slides with additional text.

8.   Make the handout available at the beginning or shortly before the presentation, so participants can use it to take notes. If it’s in Word or another easily edited format, they can take notes right in the handout.

9.   Practice giving your presentation, timing yourself and allowing plenty of time for the chat.

Avoid the following:

1.   Disallowing the use of the public chat.
2.   Adding “interactivity” by using polls to vote on non-questions, such as “How many people here have seen a boring PowerPoint presentation?”.

3.   Sending people to breakout rooms.

MozStumbler Experience Getting Better

The MozStumbler app continues to improve for those contributing location data to the Mozilla Location Service, which aims to build an experimental geolocation database from publicly observable cell tower and Wi-Fi access point information. The best part about this project is that both the MozStumbler app and the platform are open source, and there is also FxStumbler for Firefox OS!

So here is what the app looks like today:

[Screenshot of MozStumbler, 2014-01-23]

Pretty cool, right? You can hack on this app too by forking the GitHub repo linked above and contributing to the project, or you can download the latest .apk and run it on your Android device. Ultimately I hope other services like OpenSignal or Sensorly might consider contributing some of the data they collect to the Mozilla Location Service, and maybe other open source mobile platforms like Ubuntu Touch or Sailfish OS could make their own apps and collaborate with the Mozilla community.

Also, there was some hint at the Mozilla Summit that this project could ultimately turn into an Ingress-style game to keep contributors interested.

Interview with Ubuntu Contributor Cody Smith


Cody at a Ubuntu Global Jam in 2013.

I recently reached out to Cody Smith, a long-time contributor to the Ubuntu Oregon LoCo. He has run an Ubuntu Hour nearly every Friday for the last couple of years, and his passion for Free Software and getting people involved is contagious.


Benjamin: When did you first start contributing to Ubuntu and what was the motivation?

Cody: I started contributing around 2012 with bug reports, byte-size bug fixes, and getting more people using Ubuntu (and Linux in general). My motivation was wanting to give back to such a great community and to be a part of making Ubuntu better in any way I could.

Benjamin: Can you tell me more about yourself?

Cody: I am a 21-year-old college student with a thirst for knowledge of how the underpinnings of Linux (and pretty much anything I enjoy using) work. I’ve been using Ubuntu since a little after 10.04 was released.

Benjamin: You recently applied for Ubuntu Membership. Can you tell me how that process was for you?

Cody: I’d be lying if I said the membership process wasn’t a bit nerve-wracking, but all in all I’d say it went alright. To those looking to apply: make sure you have a decent amount of contributions and give the board as much information about them as possible.

Benjamin: What is your favorite desktop environment?

Cody: I don’t use a full desktop environment, just a plain window manager. That said, I prefer the i3 window manager; it fits how I do things perfectly.

Benjamin: What are your plans for contributions to Ubuntu in the future?

Cody: My plans are to maintain what I currently do (hold Ubuntu Hours, submit bug fixes for betas, and get people on board with Ubuntu), as well as take on more challenging bug fixes. There’s a sense of satisfaction when your bug fix even gets considered for inclusion in the OS, and even more so when it’s included.

Get Involved

Getting involved is easy: just read our Development Guide. These chapters in particular will help you a lot: Introduction to Ubuntu Development, Getting Set Up, and How to fix a bug in Ubuntu. Next…

Find something to work on

We run regular bug-fixing initiatives, where you can get started on hand-selected bugs, and we point out other ways to find bugs to work on.

Get in touch

There are many different ways to contact Ubuntu developers and get your questions answered. Don’t be shy and get to know us.

  • Be interactive and reach us immediately: talk to us in #ubuntu-motu on IRC.

  • Follow mailing lists and get involved in the discussions: ubuntu-devel-announce (announce only, low traffic), ubuntu-devel (high-level discussions), ubuntu-devel-discuss (fairly general developer discussions).

Stay up to date by following the ubuntudev account on Facebook, Google+, or Twitter.

Our blogs