Teaching Open Source Planet is a "Planet": a collection of personal blogs by Teaching Open Source community members working to bring the open source way into academia. We write about our inspirations and experiences in learning, teaching, and collaborating within free and open communities. In the spirit of freedom, we share and criticize in order to collectively improve. We hope you enjoy reading our thoughts; if you’re fascinated by what you see, consider adding your voice to the conversation.
Elizabeth Krumbach Joseph
Elizabeth is a stellar community contributor who has provided solid leadership and mentorship to thousands of Ubuntu Contributors over the years. She is always available to lend an ear to a Community Contributor and provide advice. Her leadership through the Community Council has been amazing and she has always done what is in the best interest of the Community.
Charles is a friend of the Community and a long-time contributor who always provides excellent and sensible feedback as we have discussions in the community. He is among the few who will always call it how he sees it, and he always has the community’s best interest in mind. For me, he was very helpful when I first started building communities in Ubuntu, sharing his own experiences and how to work through bureaucracy and do awesome work.
Michael is a Canonical employee who started as a Community Contributor, and of all the Canonical employees I have met, it is Michael who has always seemed best able to balance his role at Canonical with contributing. He is always fair when dealing with contributors and has an uncanny ability to see things through the Community’s lens, which I think many at Canonical cannot. I appreciate his leadership on the Community Council.
Thanks again to all those who make Ubuntu one of the best Linux distros available for Desktop, Server and Cloud! You all rock!
This past Saturday I co-organized Portland’s first CLSx event, which we held at Mozilla’s offices. The discussions were really great, with many centering on barriers to participation and increasing diversity in communities.
We also dove into some great discussion about curating resources available to communities, picking apart half a dozen topics from a dozen or so angles and through the various lenses of the participants.
I have to say it was really impressive to see the level of diversity we had in attendee turnout with a majority of attendees being women and most attendees being from non-tech community backgrounds.
At the end of the event we spent a good 15 minutes discussing improvements for the next CLSxPortland and discussed whether having another event in a few months would be worthwhile. Overall, I think the event was a great success and I think our next CLSx will be even bigger and better.
The other day a trivial blog post came across Planet Ubuntu proclaiming that a certain LoCo in the Ubuntu Community was no longer going to use the LoCo term because they felt it was offensive in Spanish.
I want to clear up any confusion around what LoCo means: LoCo stands for Local Community and is not a Spanish word. There is no Ubuntu ENTERLOCALEHERE Loco or loco, only Ubuntu ENTERLOCALEHERE LoCo. If you somehow missed the meaning of this abbreviation, you now know that LoCo is a positive abbreviation, one our Local Communities have used since the inception of the Local Community Program.
That being said, I would encourage people not to get so hung up on words. Despite what you may think, Users, Distros, Linux for Human Beings, and Debian are all excellent words to use, and the old Ubuntu Community, the roots of where this project came from, still means a lot to people.
Let’s not forget why Ubuntu exists and its roots!
I was really saddened to see Jono Bacon’s post today because it really seems like he still doesn’t get the Ubuntu Community that he managed for years. In fact, the things he is talking about are problems that the Community Council and Governance Boards really have no influence over, because Canonical and Mark Shuttleworth limit the Community’s ability to participate in those kinds of issues.
As such, we need to look to our leadership…the Community Council, the Technical Board, and the sub-councils for inspiration and leadership.
We need Canonical to start caring about the Community again and to invest in things like a physical Ubuntu Developer Summit, where contributors can come together for a really valuable event, do work, and build relationships that really cannot be built over Google Hangouts or IRC alone.
We need these boards to not be reactive but to be proactive…to constantly observe the landscape of the Ubuntu community…the opportunities and the challenges, and to proactively capitalize on protecting the community from risk while opening up opportunity to everyone.
If this is what we need, then Canonical and Mark need to make it so Community Members and Ubuntu Governance have some real say in the project. Sure, right now the Governance Boards can give advice to Canonical or Mark but it should be more than advice. There should be a scenario where the Contributors and Governance are stakeholders.
I will add that one Ubuntu Community Council member’s remark to Jono on IRC about his blog post really made the most sense:
the board have no power to be inspirational and forging new directions, Canonical does
I really like that this council member spoke up on this and I agree with that assessment of things.
I am sure this post may offend some members of these boards, but it is not meant to. This is not a reflection of the current staffing; it is a reflection of the charter and purpose of these boards. Our current board members do excellent work with good and strong intentions, but within that current charter. We need to change that charter, though, staff appropriately, and build an inspirational network of leaders that sets everyone in this great community up for success. This, I believe, will transform Ubuntu into a new world of potential, a level of potential I have always passionately believed in.
Honestly, if this is the way Jono felt, then I think he should have been going to bat for the Community and Ubuntu Governance when he was Community Manager. Right now the Community and Governance cannot be inspirational leaders, because Canonical controls the future of Ubuntu, and the Community Council, Governance Boards, and Ubuntu Members have very little say in the direction of the project.
I encourage folks to go read Jono’s post and share your thoughts with him, but also read the comments on his blog post from current and former members of Ubuntu’s Governance and contributors to Ubuntu. In closing, I would also like to applaud the work of the current and former Community Councils and Governance Boards: you all do great work!
Today is an important day because today we celebrate a decade of Firefox. Yep, that’s right. Firefox 1.0 was released 10 years ago today. I can’t imagine what the Internet would be like today if Firefox had not existed for the past decade, but I can imagine what the future of Firefox looks like and I think it is a bright future. Every week I hear from users and organizations using Mozilla Firefox and it puts a smile on my face to hear the stories from users who talk about why Firefox is so important to them whether it be Privacy, Security or simply because they support Mozilla’s mission.
I’m glad to be a part of the Mozilla Firefox story, and it’s incredible to be part of the team that releases Firefox to millions of users each release. I’m really excited to see what the next ten years hold for Mozilla and for Firefox, and I think there are many new ways Mozilla can continue to have a positive impact on the Open Web and help enable younger generations to learn the Open Web.
I hope you will get involved in Mozilla’s efforts as a supporter or contributor. And don’t forget, you too can celebrate Firefox’s birthday; more information on how to do that can be found here.
I’m really excited to have joined the OpenPOWER Foundation as an individual member (the first Ubuntu member, even) just yesterday. I have already started contributing to projects and have joined a workgroup of the foundation, where I hope to offer my experience with software and hardware.
I think the OpenPOWER Foundation is going to drive forward some really important innovation, and I am looking forward to being part of that.
Here are some good articles and pages where you can learn more about the OpenPOWER Foundation:
Part 1: SourceForge
When was the most recent change made to the project?
The latest commit was on 2012-01-25.
How active is the project? How can you tell?
Not very active; the last commit was in 2012.
Part 2: Open Hub - Explore Mifos
What is the main programming language used in Mifos?
How many lines of code does Mifos have?
Click on “User & Contributor Locations” (lower right side of screen). List some of the locations of the developers.
Go back to the main Mifos page. Click on the “Languages” link. How many languages is Mifos written in?
Java, XML, PHP, Other
I want to congratulate the Ubuntu Teams on releasing another solid release of Ubuntu. I would like to take a moment to encourage those installing and upgrading to Ubuntu 14.10 Utopic Unicorn to enable Telemetry and Firefox Health Report on Firefox.
If you are feeling adventurous, we can always use testers of Firefox Nightly, not only on Ubuntu but across all Linux distros. Enabling Telemetry and e10s (Electrolysis) will help us deliver a faster and better Firefox with each release!
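For reference, a minimal user.js sketch of the relevant settings looks roughly like this; the pref names below are as I recall them for this era of Firefox, so verify each one in about:config before relying on it:

```js
// user.js sketch: opt in to Telemetry, Firefox Health Report, and e10s on Nightly
// (pref names assumed from memory; confirm in about:config)
user_pref("toolkit.telemetry.enabled", true);                // Telemetry
user_pref("datareporting.healthreport.uploadEnabled", true); // Firefox Health Report
user_pref("browser.tabs.remote.autostart", true);            // e10s (Electrolysis)
```

You can also flip the same prefs interactively by searching for them in about:config.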
Instructions for installing Firefox Nightly on:
Keep rocking Free Software!
fOSSa 2014 will be in Rennes, France, on 19, 20 and 21 November 2014.
This year the event has three themes:
For more information and to register visit the fOSSa website.
Sadly I won’t be able to make it this year, which is a shame as it’s a great event with lots of interesting topics.
This week saw the launch of KAIYUANSHE (开源社), an association comprising both companies and universities with the aim of providing developers in China with education, tools and services to foster a healthy and robust open source ecosystem.
From the outset, KAIYUANSHE is working through two core programs. The first, Open Source Star, helps software developers apply an open source license to their projects, and specifically recognizes those that use one of the several available OSI-approved licenses.
The second program is called Open Source Ambassadors. Through this program, the alliance aims to recognize individuals and organizations who are actively engaged in community efforts, for their work to champion best practices and collaboration.
At OSS Watch here at the University of Oxford we’ve also been collaborating with the new initiative, providing access to our content and tools so that they can be localised and translated. You can find Chinese versions of some of our briefing notes on the KAIYUANSHE website already, and I’m sure more will soon follow.
Initial members of the association include Ubuntu Kylin, Microsoft Open Technologies, GitCafe, CSDN and Mozilla. For more information visit the KAIYUANSHE website.
Last weekend I organised the first OggCamp to be held in Oxford. OggCamp is an annual free culture unconference, where 300 people with a variety of interests related to open source, open hardware, creative commons and more meet up to share projects, ideas and experience.
As an unconference, the vast majority of the schedule is decided on the day. This means that we never really know what’s going to happen, but we always have a great range of interesting talks, and this year was no different. Talks this year included a demo of a hydrogen-powered Raspberry Pi, the beginnings of a project to create an open source wireless presentation dongle, software-defined radio, and several live podcast recordings.
Alongside our 3 presentation tracks, we had a fantastic exhibition hosting stands from the event’s sponsors as well as a number of local hackspaces. Projects being shown off included a vintage teletype connected to Twitter, an open source CNC router, a home heating automation system, and a persistence-of-vision display using a bike wheel.
The result of all of this was a fantastic weekend full of fun and inspiration. Next year’s event isn’t in the works yet, but I’m already excited for next time.
I wanted to share some tips I have for running events. In the last seven years or so, I have run events that were small meetup style events all the way up to conferences where accountability and planning spanned dealing with thousands of attendees and a large team of volunteers to get work done. Here are some of the best practices I have learned through experience or from other event planners.
As an event organizer, one of the most important responsibilities you have is communicating regular updates to those who volunteer or are on the team supporting your event. That means keeping a roster of those who have offered to help and sending out high-level updates on a regular basis. Additionally, it is the responsibility of an organizer to ensure that each person who has offered to help is given instructions on their task and knows the deadline for their deliverables.
Don’t expect volunteers to come to you; instead, make communication a part of your workflow for the event. Better yet, update the community, project, or company associated with the event so everyone knows its progress.
Keeping a roster is one good way of documenting who your event supporters are, but it is also good to have a master plan highlighting all the tasks that need completion to make your event a success, who owns each task, its current status, and when it is due.
There is nothing worse than coming up short in your budget, so always give yourself a little bit of padding after you have listed all your expected costs. That way, if something goes wrong, such as a purchase not arriving, you can buy the item the day of the event and still have budget to cover it.
Be sure to regularly thank your team for the hard work they are putting in to make the event a success. In the case of working with a team of volunteers, they choose to be there and so recognizing their daily work and praising them will make them feel good about that work. Recognition will also increase likelihood of future volunteering for events.
Make sure that you plan some fun events for not only your team but for event participants in order to make sure the event is fun and not just work. In the case of your team, you could have an icebreaker activity and team dinner and for participants you could offer a mixer on the first night of your event.
Do go through your contacts and let people who might be interested know about the event. Use mailing lists, Facebook, Twitter, Google+ and perhaps even sites like Lanyrd. Engage local meetups and coworking spaces and tell them about your upcoming event.
At the very minimum, you should start promoting your event four weeks before it happens and continually promote it until the last day of the event or the day of in the case of a single day event.
Be sure to have people introduce themselves, and you should also go out and try to meet every participant to find out what they do and what they are interested in. The hallway track of any event can be one of the most valuable experiences for any attendee.
Create a Facebook group, mailing list, or some other way of keeping people connected. This will help create a community around your event that will help it grow and continue for many future iterations. Also be sure to encourage attendees and the event team to share memorable photos on social media so your event reaches those who couldn’t attend.
Do you have any tips for planning and running an awesome event? Share them below in the comments!
I wanted to post an update to my post Sabbatical Reading List.
To better keep track of my progress with my reading, I’ve started tagging the books on my LibraryThing account in different categories:
Back in August Wikimania came to London and I heard some interesting discussion there of Wikipedia’s approach to open access materials and the tools they are developing to support that approach. This github repo contains some interesting open source projects designed mainly to automate the process of identifying cited external resources that can be copied into Wikipedia’s repositories of supporting material wikisource (for texts) and upload.wikimedia.org (for pictures, video and sound).
open-access-media-importer, for example, is a tool which searches PubMed, the online repository of academic biology papers, for media files licensed under the Creative Commons attribution licence and copies them into the Wikimedia repository. Where the files are in media formats encumbered by patents, the script also attempts to convert them to the patent-free Ogg format.
In the same GitHub repo, the OA-Signalling project presents a developing framework for flagging open access academic papers using standardised metadata, perhaps integrated in future with the systems being developed by DOAJ and CrossRef. This Wikipedia project page explains further:
Some automated tools which work with open access articles are already created. They impose nothing upon anyone who does not wish to use them. For those who wish to use them, they would automate some parts of the citation process and make an odd Wikipedia-specific citation which, contrary to academic tradition, notes whether a work is free to read rather than subscription only. The tools also rip everything usable out of open access works, including the text of the article, pictures or media used, and some metadata, then places this content in multiple Wikimedia projects including Wikimedia Commons, Wikisource, and Wikidata, as well as generating the citation on Wikipedia.
During the sessions in which open access and these tools were discussed, many participants expressed strong dislike for academic publishers and their current closed practices. Clearly for many the idea that Wikipedia could become the de facto platform for academic publication was a charming idea, and more open access was seen as the best route to achieving this.
Many years ago I worked in a digital archive, and one of the problems we faced was that academics who were depositing their databases and papers wanted to be able to revise them and effectively remove the earlier, unrevised versions. Naturally this made our jobs more challenging, and to a certain extent seemed to be opposed to the preservation role of the archive. My experiences there make me wonder how the same academics would react to their papers being hoovered up by Wikipedia, potentially to become unalterable ‘source’ copies attached to articles in the world’s most used reference work. On the one hand it is a great practical application of the freedoms that this particular kind of open access provides. On the other hand, it perhaps risks scaring authors into more conservative forms of open access publication in the future. Personally I hope that academics will engage with the tools and communities that Wikipedia provides, and handle any potential friction through communication and personal engagement. And in the end, as these tools are open source, they could always build their own hoover.
We change the world with millions of tiny patches… our world of open technology and culture is built one patch, one line, one edit at a time — and that’s precisely why it’s powerful. It brings billions of tiny, ordinary moments together to transform the world. If we teach it for our code, we can preach it for our giving. If you’d buy me a drink, or treat an open source newcomer to dinner, send that $3-$20 to the Ada Initiative tonight. –August 30, 2013
Why do we need to do this? Well, being a woman in open technology and culture is like riding a bike on a street made for cars, where rain and dirt get kicked into your face, and you are constantly, painfully aware that if you have any sort of collision with a car… the car will win. Yes, this is happening in our world, to our friends and to our colleagues; it’s happened to me personally more times than I care to remember. The farther you are from the straight white male difficulty setting, the rougher the terrain becomes.
And quite honestly, we’re busy. I’m busy. You’re busy. This isn’t our job — we have so many other things to do. I mean, we’re all:
And guess what? There are so many people who want to join us. So many people who want to help us do all this work, but don’t, because they know that work — the good work — is likely to come with a lot of really, really awful stuff, like this sampling of incidents since last year (trigger warning: EVERYTHING).
The less time women spend dealing with that stuff, the more time they have to help us with our work. And the more people will want to help us with our work. I mean, would you want to accept a job description that included the item “must put up with demeaning harassment and sexual jokes at any time, with no warning, up to 40+ hours per week”?
Making our world a good environment for all sorts of people is, in fact, our job — or at least part of it. The folks at the Ada Initiative have made supporting women in open tech/culture their entire job — supporting it, supporting people who support it, and basically being the equivalent of code maintainers… except instead of code, the patches they’re watching and pushing and nudging are about diversity, inclusion, hospitality, and just plain ol’ recognition of the dignity of human beings.
They want to support you. With better conference environments, training workshops and materials, and really awesome stickers, among many other things. (Did you know that the Ada Initiative was one of the first woman-focused tech organizations to actually say the word “feminism”?)
So please, donate and support them, so they can support you — and me, and all of us — in supporting women in open tech/culture.
Now, my own contribution is a bit… sparse, financially. I’m a grad student earning less than $800 a month, and I’m waiting for my paycheck to come in so I can contribute just a few dollars — but every little bit helps. And there’s another way I can help out: I can bribe you, dear readers, to donate.
Remember that “active vs reflective” learning styles post I wrote in August? Well, there are 3 more: sensing/intuitive, visual/verbal, and global/sequential. I’ve got them all transcribed here and ready to go. And if we reach $1024 in donations to the Ada Initiative under the Learning Styles campaign within the next week, I will release them under a creative-commons license.
What’s more: the first 3 people who donate $128 or more to this campaign and email me their receipt will get a free 1-hour Skype call with me to discuss their personal programming learning styles, and will be featured as case studies on one of those three posts (I’ll link to your website and everything).
The Free Software and Open Source Symposium (FSOSS) 2014 is around the corner, and it's shaping up to be the best in years. We have well over 30 talks spread over 2 days, covering just about every corner of open source, from new and upcoming technologies to business models. We have a keynote from my colleague David Humphrey examining the implications of Heartbleed, as well as keynotes from Chris Aniszczyk (Twitter) and Bob Young (Lulu/Red Hat/TiCats). There are speakers from Canada, the US, Hungary, the UK, Cuba, and India, representing open source communities, academia, entrepreneurs, startups, and companies such as Mozilla, Cisco, AMD, Red Hat, and Rackspace.
Until October 10, registration for this event is just $40 (or, for students and faculty of any school, $20), which includes access to all of the keynotes, talks, and workshops, two lunches, a wine/beer/soft drink reception, a t-shirt, and swag.
Full details can be found at fsoss.ca -- see you October 23/24!
We’ve decided to change the way we publish our newsletter, so instead of having a separate site over at http://newsletter.oss-watch.ac.uk, from now on we’ll be posting a monthly round-up of our activities on this blog. If you’re only interested in these round-ups, you can subscribe to the feed for the Newsletter category. We’ll still be publishing event reports, analysis and opinion pieces on this blog as before.
This month is a bumper edition covering what we’ve been up to over the summer. With Kuali announcing its move to a company-based governance model, Scott has looked at whether this means the end of “community-source”, and whether its choice of an AGPL license poses a risk of vendor lock-in.
We’ve also continued our work with the VALS project, helping over 60 FOSS organisations submit over 250 project ideas. The participating universities have now signed up for the programme, and students are submitting their project proposals.
Finally for this month, Mark attended the first AGM of the Research Software Engineers UK group, who are seeking to champion and support software developers working with researchers.
Following up from my previous post on my experience with Coursera, here are a few links of interest (mostly) relating to online education, with a focus on “competency-based education”, i.e., education directed specifically at teaching people to become competent at one or more tasks or disciplines:
“Hire Education: Mastery, Modularization, and the Workforce Revolution” (Michelle Weise and Clayton Christensen). Clayton Christensen is famous for his theory of “disruptive innovation”, which I think is useful not so much as a proven theory but rather as a way to structure plausible narratives about business success or failure. When Christensen fails in his predictions it’s usually because he doesn’t pay attention to things that don’t fit neatly into his preferred narratives. For example, he and co-author Michael Horn previously hyped for-profit education companies and failed to see that for many of them actually educating students was not the point. Rather those companies identified a “head I win, tails you lose” business proposition in “chasing Title IV money [i.e., government-subsidized student loans] in a federal financial aid system ripe for gaming”. This represents a second try by Christensen and his associates to forecast the future of post-secondary education.
“The MOOC Misstep and the Open Education Infrastructure” (David Wiley). One of Clayton Christensen’s blind spots is that he tends to overlook what’s going on in the area of not for profit endeavors. In his blog “Iterating toward Openness” David Wiley covers the general area of open educational resources (or OER); this post is a good introduction to his thinking.
Web Literacy Map (Mozilla project). A real-world example of the sort of competency-based open education initiative that Wiley’s promoting. See also the Open Badges project, a Mozilla-sponsored initiative to create an open infrastructure for granting and publishing credentials.
A Smart Way to Skip College in Pursuit of a Job (Eduardo Porter for the New York Times). “Nanodegrees” are online education provider Udacity’s own take on competency-based education, created in cooperation with major employers.
“Missing Links: How Coding Bootcamps Are Doing What Higher Ed and Recruiting Can’t” (Robert McGuire for SkilledUp). You may be beginning to see a trend here: A lot of the action in competency-based training is around software development, data science, and related fields. That’s because there’s high demand for skilled employees in certain fields and a lack of truly-focused traditional educational offerings to meet that demand. A related trend: Sites like SkilledUp that are trying to be become trusted guides to these new-style offerings.
Last but not least, here are some other people’s reviews of the Johns Hopkins Data Science Specialization courses on Coursera that I’m currently taking:
From a local point of view these changes (if indeed they continue and are amplified) are not likely to affect high-end universities like Johns Hopkins; they’ll survive based on their ability to select the most talented applicants and plug them into a set of networks that will maximize their chances of success. [1] The question is rather how they’ll affect institutions like Howard Community College that serve a broader student population that’s looking to acquire job-relevant skills.
1. Note that from this point of view online offerings like the Johns Hopkins Data Science Specialization help to promote the institution and identify potential applicants. In fact, just this week I received an email from the Bloomberg School of Public Health inviting me to attend one of their “virtual info sessions” for people considering applying.
Some exciting news for Node: npm, one of the largest package management systems, is pushing out a major upgrade with version 2.0.0 that adds some great new features. I have been experimenting with Node for a little while with katelibby, an IRC bot, and beaubot, a bot for Twitter, and each relies almost entirely on several npm packages for its core functionality.
The most notable change is scoped packages. Historically, the caret operator (^) has let you assign acceptable version ranges for the dependencies used within your package; now, with 2.0.0, npm adds namespaces for personal registries. By prefixing a package name with an @ sign, you can create a scope tied to a registry, which can contain multiple scoped packages internal to that registry. This allows you to be logged into multiple registries at once and keep your private, non-main registries up to date. npm now uses token-based authentication, and credentials can be shared between multiple scoped packages within a registry. These upgrades are probably most useful for enterprise companies maintaining their own large private code bases. A practical demonstration of the changes could be two different versions of Grunt, where features that have been deprecated in an older version could be used in combination with the newest release. The 2.0.0 release also improves reliability, fixing a number of race conditions, bugs, and dependency issues. It's good to see npm getting enterprise-level features in its public release.
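To make the scoped-package idea concrete, here is a minimal sketch; the @myorg scope, the package name, and the registry URL are all hypothetical placeholders, not real projects:

```shell
# Associate the @myorg scope with a private registry (URL is a placeholder)
npm config set @myorg:registry https://registry.example.com/

# Log in against that registry; npm 2.x stores a per-registry auth token
npm login --registry=https://registry.example.com/ --scope=@myorg

# Install a scoped package; it is unpacked into node_modules/@myorg/mypackage
npm install @myorg/mypackage --save
```

In package.json the dependency then appears as "@myorg/mypackage": "^1.0.0", with the caret still controlling the acceptable version range exactly as it does for unscoped packages.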
This past week marked the end of Maker Party 2014. The results are well beyond what we expected and what we did last year — 2,513 learning events in 86 countries. If you were one of the 5,000+ teachers, librarians, parents, Hivers, localizers, designers, engineers and marketing ninjas who contributed to Webmaker over the past few months, I want to say: Thank you! You did it! You really did it!
What did you do? You taught over 125,000 people how to make things on the web — which is the point of the program and an important end in itself. At the same time, you worked tirelessly to build out and expand Webmaker in meaningful ways. Some examples:
It’s important to say: these things add up to something. Something big. They add up to a better Webmaker — more curriculum, better tools, a larger network of contributors. These things are assets that we can build on as we move forward. And you made them.
You did one other thing this summer that I really want to call out — you demonstrated what the Mozilla community can be when it is at its best. So many of you took leadership and organized the people around you to do all the things I just listed above. I saw that online and as I traveled to meet with local communities this summer. And, as you did this, so many of you also reached out and mentored others new to this work. You did exactly what Mozilla needs to do more of: you demonstrated the kind of commitment, discipline and thoughtfulness that is needed to both grow and have impact at the same time. As I wrote in July, I believe we need to simultaneously drive hard on both depth and scale if we want Webmaker to work. You showed that this was possible.
So, if you were one of the 5000+ people who contributed to Webmaker during Maker Party: pat yourself on the back. You did something great! Also, consider: what do you want to do next? Webmaker doesn’t stop at the end of Maker Party. We’re planning a fall campaign with key partners and networks. We’re also moving quickly to expand our program for mentors and leaders, including thinking through ideas like Webmaker Clubs. These are all things that we need your help with as we build on the great work of the past few months.
Last Monday I attended the first (hopefully of many!) AGM of The UK Community of Research Software Engineers. The group has been formed to champion the cause of software engineers producing software for research, be they developers who are embedded in research groups, or academics who have found themselves developing and maintaining software. Throughout the day, a number of issues were debated by the group.
While the career path for academics hinges on them publishing papers, developers contributing to research through their work often find that they don't get the opportunity to publish. One of the problems that RSE seeks to address is finding an alternative way for universities to give recognition to the contribution of software engineers to research.
Should universities seek to support development of research software centrally, or is it better done in departments? At UCL, they’ve formed a central group of developers, partly from core funding and partly from project funding, who can provide development effort to research projects. While this provides a useful core of development expertise, a central service can’t provide the same level of domain-specific knowledge that some research groups will require, and some institutions simply don’t have the skills base in central IT to provide the development support that researchers would find valuable.
Another approach for central support for research software engineers is to provide training and tools to support good software engineering practice. Version control, continuous integration and other common tools can be instilled in researchers’ workflows through collaboration with experienced developers, or through training initiatives such as Software Carpentry. Provisioning systems like GitLab and Jenkins centrally provides easy access to infrastructure which supports these practices.
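As a sketch of what such central provisioning enables (the file below is a hypothetical minimal example, assuming a Python project), a GitLab CI configuration can run a project's tests automatically on every push:

```yaml
# .gitlab-ci.yml – run the test suite on every push
test:
  script:
    - pip install -r requirements.txt   # install the project's dependencies
    - pytest                            # run the tests
```

The same practice works with Jenkins via a build job that checks out the repository and runs equivalent commands.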
These issues and more were discussed in groups over the day, and will continue to be discussed by the RSE community. If you’re a research software engineer, or just want to help champion their cause, you can visit the website and join the discussion group.
Dipping into Julian Orr’s Talking about Machines, an ethnography of Xerox photocopier technicians, has set off some light bulbs for me.
First, there’s Orr’s story: Orr dropped out of college and got drafted, then worked as a technician in the military before returning to school. He paid the bills doing technical repair work, and found it convenient to do his dissertation on those doing photocopy repair.
Orr’s story reminds me of my grandfather and great-uncle, both of whom were technicians–radio operators–during WWII. Their civilian careers were as carpenters, building houses.
My own dissertation research is motivated by my work background as an open source engineer, and my own desire to maintain and improve my technical chops. I’d like to learn to be a data scientist; I’m also studying data scientists at work.
Also fascinating was Orr's discussion of the Xerox technicians' identity as technicians, as opposed to customers:
The distinction between technician and customer is a critical division of this population, but for technicians at work, all nontechnicians are in some category of other, including the corporation that employs the technicians, which is seen as alien, distant, and only sometimes an ally.
It’s interesting to read about this distinction between technicians and others in the context of Xerox photocopiers when I’ve been so affected lately by the distinction between tech folk and others and data scientists and others. This distinction between those who do technical work and those who they serve is a deep historical one that transcends the contemporary and over-computed world.
I recall my earlier work experience. I was a decent engineer and engineering project manager. I was a horrible account manager. My customer service skills were abysmal, because I did not empathize with the client. The open source context contributes to this attitude, because it makes a different set of demands on its users than consumer technology does. One gets assistance with consumer grade technology by hiring a technician who treats you as a customer. You get assistance with open source technology by joining the community of practice as a technician. Commercial open source software, according to the Pentaho beekeeper model, is about providing, at cost, that customer support.
I’ve been thinking about customer service and reflecting on my failures at it a lot lately. It keeps coming up. Mary Gray’s piece, When Science, Customer Service, and Human Subjects Research Collide explicitly makes the connection between commercial data science at Facebook and customer service. The ugly dispute between Gratipay (formerly Gittip) and Shanley Kane was, I realized after the fact, a similar crisis between the expectations of customers/customer service people and the expectations of open source communities. When “free” (gratis) web services display a similar disregard for their users as open source communities do, it’s harder to justify in the same way that FOSS does. But there are similar tensions, perhaps. It’s hard for technicians to empathize with non-technicians about their technical problems, because their lived experience is so different.
It’s alarming how much is being hinged on the professional distinction between technical worker and non-technical worker. The intra-technology industry debates are thick with confusions along these lines. What about marketing people in the tech context? Sales? Are the “tech folks” responsible for distributional justice today? Are they in the throes of an ideology? I was reading a paper the other day suggesting that software engineers should be held ethically accountable for the implicit moral implications of their algorithms. Specifically the engineers; for some reason not the designers or product managers or corporate shareholders, who were not mentioned. An interesting proposal.
Meanwhile, at the D-Lab, where I work, I’m in the process of navigating my relationship between two teams, the Technical Team, and the Services Team. I have been on the Technical team in the past. Our work has been to stay on top of and assist people with data science software and infrastructure. Early on, we abolished regular meetings as a waste of time. Naturally, there was a suspicion expressed to me at one point that we were unaccountable and didn’t do as much work as others on the Services team, which dealt directly with the people-facing component of the lab–scheduling workshops, managing the undergraduate work-study staff. Sitting in on Services meetings for the first time this semester, I’ve been struck by how much work the other team does. By and large, it’s information work: calendaring, scheduling, entering into spreadsheets, documenting processes in case of turnover, sending emails out, responding to emails. All important work.
This is exactly the work that information technicians want to automate away. If there is a way to reduce the amount of calendaring and entering into spreadsheets, programmers will find a way. The whole purpose of computer science is to automate tasks that would otherwise be tedious.
Eric S. Raymond’s classic (2001) essay How to Become a Hacker characterizes the Hacker Attitude, in five points:
There is no better articulation of the “ideology” of “tech folks” than this, in my opinion, yet Raymond is not used much as a source for understanding the idiosyncrasies of the technical industry today. Of course, not all “hackers” are well characterized by Raymond (I’m reminded of Coleman’s injunction to speak of “cultures of hacking”), and not all software engineers are hackers. (I’m sure my sister, a software engineer, is not a hacker. For example, based on my conversations with her, it’s clear that she does not find all the unsolved problems of the world intrinsically fascinating. Rather, she finds problems that pertain to some human interest, like children’s education, to be most motivating. I have no doubt that she is a much better software engineer than I am–she has worked full time at it for many years and now works for a top tech company. As somebody closer to the Raymond Hacker ethic, I recognize that my own attitude is no substitute for that competence, and hold my sister’s abilities in very high esteem.)
As usual, I appear to have forgotten where I was going with this.
Today Jisc announced the beta G4HE website. The site pulls data from the BIS-funded RCUK Gateway to Research API and provides an interface to allow searching and visualising data on research in the UK.
For example, you can see at a glance the councils funding research at the University of Oxford, as well as the key collaboration partners in joint research work
There are several interesting “open” angles to this project.
First, it’s ‘open’ in the sense that the site is opening up access to information about research spending.
Second, the site is using crowdsourcing to clean up the available data to make it more meaningful – for example by asking visitors to help identify duplicates and naming mistakes from the original data.
This is my first transatlantic trip ever and perhaps my longest flight so far, so I’m both excited and honestly a bit anxious since I generally do not love flying. But I am even more happy to be able to go and sync up with other leaders from the Mozilla Community and most notably we will be spending a day with Mitchell Baker, Mark Surman and Mary Ellen.
This will also be my first ReMo Camp so I’m not entirely sure what to expect but I will be in good company with fellow contributors who I have done meetups with before and engage with often so it will be great to see familiar and friendly faces. I always say meeting up with Mozillians is like meeting up with family because we are really so tight knit.
I hope to spend most of the first day gallivanting around Berlin and its suburbs, visiting a long list of tourist spots where I want to take photos. Of course I will also take the opportunity to meet up with some Wikipedians, Ubuntu Users and others with whom I made arrangements to meet in advance of the trip.
Stay tuned for photos!
The Web has been filled with buzz of the news of new Android watches and the new Apple Watch but I’m still skeptical as to whether these first iterations of Smartwatches will have the kind of sales Apple and Google are hoping for.
I do think wearable tech is the future. In fact, I have owned a Pebble, a Qualcomm Toq and a Fitbit Flex, and the most valuable of the three is probably the Fitbit, because it has solid features, while the Pebble still needs useful apps, as does the Qualcomm Toq.
As nice as these devices look, and as pleasing as they are to consumers, I think the price tag ultimately isn't worth the feature list. But if they become as powerful as a smartphone, with a comparable number of apps, then I think they will catch on.
That being said, I do not think smartwatches are anything but buzz right now regardless of what OS is running on them whether it be Android, iOS, Firefox OS, Ubuntu or others. I think we still need a couple years before this technology will be on par with smartphones and tablets and at a reasonable price point.
I felt like I had to write something on this after reading an article on OMG Ubuntu about the possibility of an Ubuntu-powered smartwatch.
Chuck Severance recently published a post entitled How to Achieve Vendor Lock-in with a Legit Open Source License – Affero GPL where he criticises the use of AGPL licenses, particularly its use – or at least, intended use – by Kuali. Chuck’s post is well worth reading – especially if you have an interest in the Kuali education ERP system. What I’m going to discuss here are some of the details and implications of AGPL, in particular where there are differences between my take on things and the views that Chuck expresses in his post.
Copyleft licenses such as GPL and AGPL are more restrictive than the so-called permissive licenses such as the Apache Software License and MIT-style licenses. The intent behind the additional restrictions is, from the point of view of the Free Software movement, to ensure the continuation of Free Software. The GPL license requires any modifications of code it covers to also be GPL if distributed.
With the advent of the web and cloud services, the nature of software distribution has changed; GPL software can be – and is – used to run web services. However, using a web service is not considered distributing the software, and so companies and organisations using GPL-licensed code to run their site are not required to distribute any modified source code.
Today, most cloud services operate what might be described as the “secret source” model. This uses a combination of Open Source, Free Software and proprietary code to deliver services. Sometimes the service provider will contribute back to the software projects they make use of, as this helps improve the quality of the software and helps build a sustainable community – but they are under no obligation to do so unless they actually choose to distribute code rather than use it to run a service.
The AGPL license, on the other hand, treats deployment of websites and services as “distribution”, and compels anyone using the software to run a service to also distribute the modified source code.
AGPL has been used by projects such as Diaspora, StatusNet (the software originally behind Identi.ca – it now uses pump.io), the CKAN public data portal software developed by the Open Knowledge Foundation, and MIT’s EdX software.
[UPDATE 20 September 2014: EdX has since relicensed its AGPL component under the Apache License]
We’ve also discussed before on this blog the proposition – made quite forcefully by Eben Moglen – that the cloud needs more copyleft. Moglen has also spoken in defence of the AGPL as one of the means whereby Free Software works with cloud services.
So where is the problem?
The problem is that the restrictions of AGPL, like GPL before it, can give rise to bad business practice as well as good practice.
In a talk at Open World Forum in 2012, Bradley Kuhn, one of the original authors of AGPL, reflected that, at that time, some of the most popular uses of AGPL were effectively “shakedown practices” (in his words). In a similar manner to how GPL is sometimes used in a “bait and switch” business model, AGPL can be used to discourage use of code by competitors.
For example, as a service provider you can release the code to your service as AGPL, knowing that no-one else can run a competing service without sharing their modifications with you. In this way you can ensure that all services based on the code have effectively the same level of capabilities. This makes sense when thinking about the distributed social networking projects I mentioned earlier, as there is greater benefit in having a consistent distributed social network than having feature differentiation among hosts.
However, in many other applications, differentiation in services is a good thing for users. For an ERP system like Kuali, there is little likelihood of anyone adopting such a system without needing to make modifications – and releasing them back under AGPL. It would certainly be difficult for another SaaS provider to offer something that competes with Kuali using their software based on extra features, as any improvements they could make would automatically need to be shared back with Kuali anyway. They would need to compete on other areas, such as price or support options.
But back to Chuck’s post – what do we make of the arguments he makes against AGPL?
If we look back at the four principles of open source that I used to start this article, we quickly can see how AGPL3 has allowed clever commercial companies to subvert the goals of Open Source to their own ends:
- Access to the source of any given work – By encouraging companies to only open source a subset of their overall software, AGPL3 ensures that we will never see the source of the part (b) of their work and that we will only see the part (a) code until the company sells itself or goes public.
- Free Remix and Redistribution of Any Given Work – This is true unless the remixing includes enhancing the AGPL work with proprietary value-add. But the owner of the AGPL-licensed software is completely free to mix in proprietary goodness – but no other company is allowed to do so.
- End to Predatory Vendor Lock-In – Properly used, AGPL3 is the perfect tool to enable predatory vendor lock-in. Clueless consumers think they are purchasing an “open source” product with an exit strategy – but they are not.
- Higher Degree of Cooperation – AGPL3 ensures that the copyright holder has complete and total control of how a cooperative community builds around software that they hold the copyright to. Those that contribute improvements to AGPL3-licensed software line the pockets of commercial company that owns the copyright on the software.
On the first point, access to source code, I don’t think there is anything special about AGPL. Companies like Twitter and Facebook already use this model, opening some parts of their code as Open Source, while keeping other parts proprietary. Making the open parts AGPL makes a difference in that competitors also need to release source code, so I think overall this isn’t a valid point.
On the second point, mixing in other code, Chuck is making the point that the copyright owner has more rights than third parties, which is unarguably true. It's also true of other licenses. I think it's certainly the case that, for a service provider, AGPL offers some competitive advantage.
Chuck’s third point, that AGPL enables predatory lock-in, is an interesting one. There is nothing to prevent anyone from forking an AGPL project – it just has to remain AGPL. However, the copyright owner is the only party that is able to create proprietary extensions to the code without releasing them, which can be used to give an advantage.
However, this is a two-edged sword, as we’ve seen already with MySQL and MariaDB; Oracle adding proprietary components to MySQL is one of the practices that led to the MariaDB fork. Likewise, if Kuali uses its code ownership prerogative to add proprietary components to its SaaS offering, that may precipitate a fork. Such a fork would not have the ability to add improvements without distributing source code, but would instead have to differentiate itself in other ways – such as customer trust.
Finally, Chuck argues that AGPL discourages cooperation. I don’t think AGPL does this any more than GPL already does for Linux or desktop applications; what is new is extending that model to web services. However, it certainly does offer less freedom to its developer community than MIT or ASL – which is the point.
In the end customers do make choices between proprietary, Open Source, and Free Software, and companies have a range of business models they can operate when it comes to using and distributing code as part of their service offerings.
As Chuck writes:
It never bothers me when corporations try to make money – that is their purpose and I am glad they do it. But it bothers me when someone plays a shell game to suppress or eliminate an open source community. But frankly – even with that – corporations will and should take advantage of every trick in the book – and AGPL3 is the “new trick”.
As we’ve seen before, there are models that companies can use that take advantage of the characteristics of copyleft licenses and use them in a very non-open fashion.
There are also other routes to take in managing a project to ensure that this doesn’t happen; for example, adopting a meritocratic governance model and using open development practices mitigates the risk of the copyright owners acting against the interests of the user and developer community. However, as a private company there is nothing compelling Kuali to operate in a way that respects Free Software principles other than the terms of the license itself – which of course as copyright owner it is free to change.
In summary, there is nothing inherently anti-open in the AGPL license itself, but combined with a closed governance model it can support business practices that are antithetical to what we would normally consider “open”.
Choosing the AGPL doesn’t automatically mean that Kuali is about to engage in bad business practices, but it does mean that the governance structure the company chooses needs to be scrutinised carefully.
The last three months or so I’ve been in school (which is why I haven’t been posting as much lately). Not a real bricks-and-mortar school—I’ve been participating in the “Data Science Specialization” series of online courses created by faculty at the Johns Hopkins Bloomberg School of Public Health and offered by Coursera, a startup in the online education space. It’s been an interesting experience, and well worth a blog post.
The obvious first question is, why am I doing this? Mainly because I thought it would be fun. I was an applied mathematics (and physics) major in college, enjoyed the courses I had in probability, statistics, stochastic processes, etc., and wanted to revisit what I had learned and (for the most part) forgotten. It’s one of my hobbies—a (bit) more active one than watching TV or reading. Also, I’ve done some minor fiddling about with statistics on the blog (for example, looking at Howard County election data), am thinking about doing some more in the future, and wanted to have a better grounding in how best to do this. Finally, “data scientist” is one of the most hyped job categories in the last few years, and even though I probably won’t have much occasion to use this stuff in my current job it certainly can’t hurt to learn new skills in anticipation of future jobs.
The next question is, why an online course? Because I didn’t have the time (or the money) to commit to attending an in-person class, but I wanted the structure that a formal class provides. I’ve been (re)learning linear algebra out of a textbook for over four years now, and I still haven’t gotten past chapter 3. Part of the reason is that I’m doing every exercise and blogging about it, but mainly it’s that I don’t have an actual deadline to finish my studies. In the Coursera series there are nine courses, each lasting a month, with quizzes every week and course projects every 2-4 weeks depending on the course. I’ve been doing pretty well in the courses thus far and don’t want to spoil my record. For example, the first project in the current class was due Sunday but I was concerned about missing the deadline and so finished it last Friday night.
I like the way the series of courses is structured as well, not just as a class in statistics (only) but covering the whole range of skills needed to wrangle with data in its various forms, not least including the problems of getting datasets and cleaning them up. Each class thus far has only been a month long, so the time commitment is not that great and I know any work I do today will pay off in a completed course not too far down the road. It is a fairly serious commitment of time though, especially since the course video lectures cover only a fraction of what you need to know in order to do the course projects and correctly answer the more difficult quiz questions. I’ve probably spent almost 10 hours each week working on various aspects of the classes, including doing a copious amount of Internet searching to find out the additional information I need. But it’s been time well-spent: I feel like I’m getting a good understanding of how to do “data science” tasks—not that I know everything, but I have a much better picture of what I need to know, and what it would take to finish learning it.
The course I’m currently taking (“Exploratory Data Analysis”), like the others in the series, is what’s been referred to as a MOOC, or “massive open online course”, open at no charge to anyone in the world who wants to participate over the Internet. The instructors provide video lectures and create the quizzes and class projects but are not otherwise directly involved; the students provide help to each other in online discussion forums, assisted by “community TAs”, i.e., former students who volunteer as teaching assistants. MOOCs have recently been the subject of both hype and caution; now that I’ve been involved in them day-to-day I can provide a personal perspective on the controversy.
First, I think MOOCs are good for the sort of people who invented them in the first place: Internet-savvy folks with a technological bent who are motivated to learn something and have the necessary free time and background experience and knowledge to do so effectively. I’ve certainly appreciated having convenient no-charge access to a wide variety of classes, many of which (like the courses I’m taking now) have been put together by people who are leaders and innovators within their fields. I’d even consider paying for at least some of these courses (at $49 each) in order to get a more formal “verified certificate” (as opposed to a “statement of accomplishment”), and may do so for later courses within this series—potentially good news for Coursera, which in the end is a profit-making enterprise.
However, for people who are not Internet-savvy, not all that motivated, and don’t have the necessary background, MOOCs aren’t a good choice. In fact, they’re about the worst choice there is. The dropout rates in MOOCs are extremely high (well above 90% in many cases), and the first serious test of MOOCs as a replacement for in-person college courses (at San Jose State University) was not a raging success. Which is not to say that online learning in general is doomed; in its more traditional forms (for example, University of Maryland University College) it’s doing quite fine.
MOOCs are simply the latest in a long line of attempts to move away from the traditional classroom model and “disrupt” the existing educational establishment. They’ll eventually find a place in the overall educational picture, most likely serving a variety of needs, from “learning as hobby” (what I’m doing) to high-end vocational education (what Coursera competitor Udacity seems to be morphing into) to supplementing traditional classes. But that’s for the future, and no real concern of mine; in the meantime I’m just trying to learn how to plot in R.
Summer is over, and it’s time to really get to work on my Sabbatical project.
I did do some work this summer – I’ve read 3 of the books on my Sabbatical Reading List (and added a few more to the list) and I’ve finally de-lurked on the OpenMRS developer mailing list and in some of the online meetings, and I’ve made a decision to convert all of my course materials to Markdown (the better to track changes on GitHub – see a future post). But, it’s all been pretty passive.
So, the Friday before the Labor Day Weekend, I decided it was time to get back to the “develop code in OpenMRS” part of the project.
Since it had been a couple of months since I had set up my development environment and tried to build the OpenMRS code, I decided that starting over from scratch (mostly) would not be a bad idea. Here is what I did:
cd openmrs-core
mvn clean install

cd webapp
mvn jetty:run
Now that I have a working environment that builds and runs, the next step is to choose a ticket to work on.
As we’ve mentioned before, OSS Watch is involved in the VALS Semester of Code project, enabling university students to participate in open source projects as part of their degree course. There’s a week left for FOSS projects to sign up and submit their mentored projects for the pilot, before the students can sign up and submit their proposals. As reported on the Semester of Code website, there’s been a fantastic response so far:
Organisations that Semester of Code students will have the opportunity to work with include the NASA Jet Propulsion Laboratory, Moodle, OpenMandriva and Lime Survey. The project ideas submitted so far cover topics such as audio processing, HTML parsing, API design, web development and many more.
You can read the full announcement on the Semester of Code site.
I’m going to gradually share my dissertation proposal here as I migrate it to github and into LaTeX – I’d love reactions, comments, questions, etc! The title is “a poststructural perspective on engineering and technology faculty as learners,” which will hopefully make sense after the first 2 paragraphs.
Faculty are learners, too.
We often think of engineering and technology faculty as teachers. As facilitators of student learning, they hold certain philosophies about teaching and learning that shape their interactions in the classroom. However, faculty are also learners themselves, students of the practice of “teaching engineering and technology.” As engineering education researchers concerned with research-to-practice transfer, the ways we think about faculty-as-learners likewise shape the ways we try to impact their practice. Making this thinking more visible within the engineering/technology domain is of interest to engineering and technology faculty members, faculty development professionals, and adult learning researchers.
My dissertation draws on the traditions of cognitive apprenticeship and narrative within engineering education research to explore poststructuralism as one possible perspective on faculty-as-learners. Poststructuralism is a paradigm that constantly seeks a “making-strange” and unsettling of habitual narratives. A poststructural view challenges us to remain in the discomfort of liminal (in-between) spaces where everything is constantly troubled and nothing ever really settles. This study demonstrates a concrete method for engaging faculty members in that liminal space. Through it, I anticipate contributing to our ability to articulate and value faculty explorations in chaotic territories such as large-scale curriculum redesigns, new program formation, and other places where valuable growth occurs but is rarely put into words.
How can we think of faculty as learners?
I will briefly describe several qualities of faculty-as-learners, compare and contrast them to the qualities presented in existing literature, and explore why it may be valuable to make these qualities visible. Note that by comparing and contrasting my approach to other studies, I am not making statements about the relative quality or validity of their work. I am also not saying that these ways of viewing faculty-as-learners are not present in any existing literature. I am simply using differences and similarities to more clearly articulate certain assumptions about faculty-as-learners that are present in this project.
They are situated in the community of practice of teaching their discipline.
The first quality of faculty-as-learners is that they are situated in a community of practice (Wenger, 1999), that of teaching their discipline. By making sense of the activities of other practitioners around them, they develop the skill of reflection-in-action (Schön, 1983) and use this metacognition and self-monitoring to improve their own practice. Existing engineering education change initiatives have used this cognitive apprenticeship (Collins, Brown, & Newman, 1987; Collins, Brown, & Holum, 1991) approach to build Faculty Learning Communities (Cox, 2004) and other communities of practice to encourage rigorous research in engineering education (Streveler, Smith, & Miller, 2005). Understanding faculty as situated and communal learners helps explain the limited success of an “information dissemination” approach (Siddiqui & Adams, 2013) to research-to-practice transfer. If faculty change their teaching practice primarily in response to direct interactions with colleagues (Fincher, Richards, Finlay, Sharp, & Falconer, 2012) and not print materials, it’s no surprise that journal papers continue to go unread (Borrego, Froyd, & Hall, 2010).
They are adult learners.
The second quality of faculty-as-learners is that they are adult learners. They have rich histories of experience to draw upon as well as expectations of agency (Vella, 1997). This quality allows us to portray faculty as narrators, highly capable and interdependent agents who both read and co-author the stories of “how things are done” within their culture. The Disciplinary Commons initiative (Tenenberg & Fincher, 2007), which scaffolds faculty through a dialogic process of creating teaching portfolios for their existing courses, is an exemplar of validating faculty as adult learners. Initiatives that treat faculty as mere empty “buckets” to be filled with “information about teaching,” such as information sessions dedicated to lecturing faculty about why they should not lecture, may have a limited impact because of this epistemological disjunction.
They learn by engaging in liminal experiences.
The third quality of faculty-as-learners is that they often learn by engaging in liminal experiences where their activities do not fall into a cleanly articulable structure, and can therefore be described as poststructural. In fact, the liminal experiences of faculty are often explicitly about dismantling old structures and building new ones, such as the founding of a new college (S. Kerns, Miller, & D. Kerns, 2005) or degree program (Katehi et al., 2004), dramatic overhauls of an existing curriculum (Mentkowski, 2000), or the creation of experimental classes.
The trouble with liminal experiences is that they stand outside the realm of structure, including the structure of validation and reward. Our existing academic system rewards faculty for engaging in liminal spaces in their research, so long as they are able to publish papers about it afterwards. However, the examples listed above all concern teaching, a form of scholarship that is underdeveloped and unrecognized (Shulman, 1998), with a correspondingly slow speed of change. Mann's (1918) calls for engineering educators to improve student retention, make undergraduate workloads reasonable, and increase hands-on training still sound as relevant today as when he wrote his report nearly a century ago. Our calls for transforming engineering education (National Academy of Engineering, 2005; Institute of Medicine, National Academy of Sciences, & National Academy of Engineering, 2007; McKenna, 2011) will similarly go unheeded unless we find ways to understand and articulate the ways faculty learn and work in these liminal spaces.
By having engineering and technology faculty work with their stories of liminal experiences, this study addresses the following research question: How can the interacting narratives of engineering and technology faculty inform our understanding of faculty as learners?