The Professional Plan

By Michele H. Jackson. Nuevo Día 2004 Presentation


“Professional planning” is a phrase I settled on to describe the process of setting out one's professional activities for the intentionally vague period between now and some number of years into the future. For many of you, “now” means either the last years of your graduate program or the first years of your first position. And the only distinct landmark in “the future” is contract review, promotion, and/or tenure. Talking about a “professional plan” is a way of turning that planning into something explicit and concrete.

Is that online student who they say they are? Expectations for verification of student identity

Let’s imagine Bill applies to attend your college or university. Here are some expectations that we could assume honest and reasonable people can agree on:

  1. Bill should actually exist and not be a fictional character.
  2. The person who shows up for courses saying he is Bill should actually be Bill, and not Tom.
  3. Completed assignments that Bill submits represent his actual work and not work completed by, say, Mary.

The typical instructor for the typical on-campus course pays the most attention to #3.  We check for plagiarism, we institute Honor Codes and Policies Against Academic Dishonesty, and we have students sit at every other desk during exams. Far fewer instructors take steps to ensure #2.  They might check student ID cards before exams, much like the guard at airport security trained to match your face to whatever mug shot ended up on your driver's license. And I wager most of us have never even met an instructor who has had #1 cross their mind (although I did once have a student who, I found out later, was in a witness protection program, and so he was, in a sense, fictitious).

But what if we switch it up and just talk about the online course? It suddenly becomes clear just how much we have relied on the personal relationships that come from same-time, same-place instruction. Can we really assume that Bill exists? That we haven't actually been working with Tom? That Bill's excellence in equations isn't really Mary's? Online courses don't make people lose their scruples, but they are an environment where frauds and cheats have an easier time, because instructors no longer have the additional protection of face-to-face contact.

The government recognizes this too. Part H of the Higher Education Opportunity Act (2008) requires accreditation bodies to require

an institution that offers distance education or correspondence education to have processes through which the institution establishes that the student who registers in a distance education or correspondence education course or program is the same student who participates in and completes the program and receives the academic credit.

The accreditation processes themselves spell out the requirements. For example, here is the language for the most relevant point from the SACS-COC document Distance and Correspondence Education Policy Statement:

At the time of review by the Commission, the institution demonstrates that the student who registers in a distance or correspondence education course or program is the same student who participates in and completes the course or program and receives the credit by verifying the identity of a student who participates in class or coursework by using, at the option of the institution, methods such as (1) a secure login and pass code, (2) proctored examinations, and (3) new or other technologies and practices that are effective in verifying student identification. [emphasis mine]

So, given that this requirement has existed for several years, you'd imagine that there would be some standard solutions, right? Yes and no. There are no foolproof solutions, especially for #2 and #3. Recent changes to government processes, such as those for the FAFSA, have made faking #1 much more difficult. And most institutions have processes in place that can be modified to work for online students, like requiring a social security card, a passport, or a video interview. Of the dozens of institutions I researched for this essay, none expect course instructors themselves to verify that Bill exists.

So, Bill is admitted as a student.  What happens next is the single most universally recognized best practice: a secure login and passcode managed through an “Identity Management System” (commonly known as an IDM), along with an “Acceptable Use Policy” that tells students not to share their credentials. The government allows institutions to trust their students not to give out their login and passcode to others. Sanctions for violations are spelled out in the Acceptable Use Policy, as the institution sees fit. In best practice, rules and processes are documented, applied systematically for all students, and protect student privacy.  So, the instructor who uses the tools provided by the university (such as the learning management system or the institutional email account) isn't expected to worry about expectation #2.

Which leaves expectation #3.  Ensuring that Bill-the-online-student submits his own work is left to the instructor, just the same as it is for Bill-the-on-campus-student.  This is an area in which the government has not indicated minimum requirements or even standards by which to measure compliance. And nothing so far indicates that accreditation bodies expect institutions to make cheating impossible. But we aren't supposed to turn a blind eye to it, either.

So, even if the government doesn’t require proof, what are reasonable expectations we should have when evaluating online courses in terms of how they ensure academic integrity?

The short answer is: it’s much less about the technology and much more about the teaching. Here is where the eLearning community has developed some instructional best practices.  Several are aimed at reducing a student’s temptation to cheat. For example,

  • Make academic integrity part of the culture.  Communicate with students about it regularly and in many forms, such as in orientation materials, on syllabi, and within assignments.
  • Create assignments that make plagiarism difficult.  For example, requiring several drafts of a paper instead of one final paper makes it more difficult for a student to use online paper mills.
  • Assign multimedia projects that show the person behind the name with a student’s voice or image.
  • Use “authentic assessment” activities and assignments that require active student engagement, such as journals, group projects, portfolios, and debates.

If a traditional “pencil and paper” type exam is needed, explore if your learning management system (LMS) offers options for randomizing exam questions. Set a limited window for exam completion.  Use software or an online tool that simulates a proctored environment.  Or, if necessary, require the students to find a qualified approved proctor in their locale.
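The question-randomization idea is easy to reason about outside any particular LMS. Here is a minimal, hypothetical sketch in Python (the function name and the per-student seeding scheme are my own illustration, not any vendor's API):

```python
import random

def build_exam(question_pool, num_questions, seed):
    """Draw a per-student exam from a shared question pool.

    Each student receives a different random sample of questions,
    in a different order, which makes answer-sharing harder.
    """
    rng = random.Random(seed)  # seed per student, e.g., derived from the student ID
    return rng.sample(question_pool, num_questions)

pool = [f"Q{i}" for i in range(1, 51)]            # a shared 50-question pool
exam_a = build_exam(pool, 10, seed="student-a")   # one student's 10 questions
exam_b = build_exam(pool, 10, seed="student-b")   # a different draw for another student
```

Seeding from something stable like the student ID keeps each student's exam the same across page reloads while still differing between students.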

An institution can offer these suggestions to instructors through workshops, training, instructional design assistance and other resources that support online course design and development. IT can help by testing tools and applications and recommending solutions that are both effective and reliable. And eLearning staff can help instructors to imagine different pedagogical strategies that are most likely to engage students and improve their learning experience.

Learn More

  1. Consortium of College Testing Centers
  2. Higher Learning Commission, 2012. “Practices for Verification of Student Identity.”
  3. Johnson, Lisa Marie. 2012. “Proactive strategies to promote academic integrity.”
  4. Middle States Commission on Higher Education, 2015. “Verification of Compliance with Accreditation-Relevant Federal Regulations.”
  5. US Department of Education, 2011. GEN-11-17. “Fraud in Postsecondary Distance Education Programs – URGENT CALL TO ACTION.”
  6. WCET, 2008. “Are Your Online Students Really the Ones Registered for the Course? Student Authentication Requirements for Distance Education Providers.”

5 things you should know about online courses and federal Credit Hour regulations

Here is a question that is likely to come up in a variety of conversations about online learning: are the requirements of this course equivalent to what we expect in a face-to-face course? When administrators are at the table, the conversation will likely turn to federal regulations concerning the Credit Hour. Yes, there is actual federal regulatory language defining a credit hour for federal programs (34 CFR 600.2):

An amount of work represented in intended learning outcomes and verified by evidence of student achievement that is an institutionally established equivalency that reasonably approximates not less than:

1. One hour of classroom or direct faculty instruction and a minimum of two hours of out-of-class student work each week for approximately fifteen weeks for one semester or trimester hour of credit, or ten to twelve weeks for one quarter hour of credit, or the equivalent amount of work over a different amount of time; or
2. At least an equivalent amount of work as required in paragraph (1) of this definition for other academic activities as established by the institution, including laboratory work, internships, practica, studio work, and other academic work leading to the award of credit hours.

So, here are 5 things you should know about this area and complying with the regulations:

1. Your regional accreditation body will have its own statement of its expectations for how institutions within its jurisdiction should comply. The regulations charge accreditation bodies with enforcement; they say accreditors:

• Must review the institution’s policies and procedures for determining the credit hours and the application of the institution’s policies and procedures to its programs and coursework;
• Must make a reasonable determination of whether the institution’s assignment of credit hours conforms to commonly accepted practice in higher education;
• May review and evaluate an institution’s policies and procedures for determining credit hour assignments through use of sampling or other methods in the evaluation;…

2. In my experience, faculty (who are accustomed to a fair amount of autonomy regarding their courses and course design) will want to debate the fundamental merits of the Credit Hour. Such a debate is, for all practical purposes, moot: compliance is required in order to receive Title IV funding and other federal financial assistance. But here is something interesting: faculty will assume that classroom-delivered courses are the baseline. Government documents, however, explicitly state that the definition of a credit hour neither requires nor implies any minimum amount of seat time (even for classroom-delivered courses). In other words, the standards don't privilege any type of course over another. There is no implied instructional gold standard.

3. The regulations specify a minimum standard. An institution can exceed the minimum expectations for the definition of an institutional credit hour. This is where it is unclear whether institutions can impose greater expectations upon one type of course over another. In my opinion, in the long run, it is better for institutions to develop standards that are agnostic in terms of delivery modality. If the faculty are concerned that online learning is “less than” place-based learning, try to broaden the discussion so that reasonable people would agree to apply the same standards to any course.

4. If your institution wants to use paragraph 1 as the main criterion for assessment, notice that the regulations don't dictate particular amounts of classroom time (i.e., seat time) versus out-of-class student work (i.e., homework). Instead, the emphasis is on the amount of time the student spends “academically engaged” (the government's phrase). For classroom-delivered courses, the regulations allow us to give the benefit of the doubt to classroom time as academically engaging (although that is debatable), accounting for 1/3 of the roughly 45 hours of engaged student activity expected for a credit. But this is only a convention! For any course (but particularly for online-delivered courses), the course design doesn't need to follow the conventional ratio of 1/3 instruction to 2/3 homework. Engaged time doesn't need to be spread evenly throughout the term, either.
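The conventional arithmetic behind paragraph (1) is worth making explicit; a quick sketch in Python:

```python
# Paragraph (1) of 34 CFR 600.2, per semester credit hour:
weeks = 15                  # "approximately fifteen weeks for one semester"
class_hours = 1 * weeks     # one hour of classroom/direct instruction per week
homework_hours = 2 * weeks  # a minimum of two hours of out-of-class work per week

total_engaged_hours = class_hours + homework_hours     # roughly 45 hours per credit
instruction_share = class_hours / total_engaged_hours  # the conventional 1/3
```

The regulation pins down the total engaged time; the 1/3-to-2/3 split is convention, so an online course can distribute those 45 hours however its design warrants.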

5. My guess is that only certain types of institutions will rely on paragraph 2 for compliance. Defining “student achievement” is not just a can of worms…it can be a fully loaded bear trap or a swarming beehive. Most faculty never get the opportunity to work with an instructional designer, which means most never have a discussion about how to craft effective learning objectives. Which is too bad, for a lot of reasons. One of those reasons is that it makes it unlikely that faculty will learn how to talk professionally and nondefensively about how their course results in student learning. Which means they will be less likely to tackle the larger and more interesting issue of the relationship we expect between instruction and student learning. As one of the documents states, “at its most basic, a credit hour is a proxy measure of the quantity of student learning.” When an institution increases its use of online learning, an opportunity arises to encourage conversation, reflection, and improvement of the educational experience.

References and further reading:

DCL ID: GEN-11-06 Subject: Guidance to Institutions and Accrediting Agencies Regarding a Credit Hour as Defined in the Final Regulations Published on October 29, 2010.

DOE. Program Integrity Questions and Answers – Credit Hour

SACS-COC Credit Hours Policy Statement.

Logistics and/of Communication

A few years ago, I picked up a paper at a conference poster session written by Judd Case, from Manchester College. He was making an argument for radar as a branch of communication theory.  One of his observations really resonated with me: the attention to the concept of logistics.  Drawing on the work of earlier social scientists such as Harold Innis, Lewis Mumford, and Marshall McLuhan and the concept of technics, as well as Virilio's observation of the camera ensuring a “logistics of perception,” Case argues that radar is a logistic because it first orders and then represents.  Thus, he argues, there is also a politics of logistics.

If logistics is about ordering and coordinating first, then there is a clear connection to what I call configuration, or the ordering of what exists into something new, such as a mashup. Logistics seems to go beyond just coordinating or configuring.  It implies an ordering, a perception of relations, or how things (should) relate.  Meaning is therefore strategically built into the structure, which then itself structures further action.

Logistics may indeed be different from structuring and ordering in offering a stronger connection to materiality.  In the event of logistical failure, some part of the system, some material mechanism, literally breaks. Logistics implies terms such as system capacity, throughput, and movement. So failure creates a standstill until the part is fixed or a workaround is built.

Case argues there is a politics of logistics. And I think, certainly, interesting potential for new ways of theorizing the politics of systems and of technology.

For an abstract of the dissertation that was the source of this paper, see

Attaining and Sustaining a Dynamic Learning Environment

Stop for a moment and think about this: on any college or university campus, members of at least 3 generations are collectively working together to engage in learning, creativity, and discovery. Too often, we begin any description of higher education with the roles of student and teacher.  Better to think that we are a multi-generational community with a shared objective–hopefully, a shared passion–for learning. Passion both for our own learning, and for supporting a dynamic learning environment.

If we start from this alternate place, this shared objective and this diverse learning community, what would we want to emphasize?  How would we build and sustain a dynamic learning environment? I think we focus on these four things:

  • Personal relationships and connections
  • Intellectual fulfillment and growth
  • Uniqueness of experience
  • Shared Identity

Personal relationships and connections. A dynamic learning environment is interactive, with frequent and high quality interactions among students, faculty, staff, and friends. Of course, many of these interactions are conversations that we probably wouldn’t find anywhere else: engaging and exploring ideas and perspectives in a deep, critical, and thoughtful way. Yet there should also be many more: internships to connect students with alumni and friends, work-study opportunities to connect students with staff, living and learning communities to forge life-long personal bonds with peers, and “applied” instructional experiences outside of the traditional classroom where faculty and students can work closely together to conduct research, teach or tutor, develop creative work, or serve the campus or community.

Intellectual fulfillment and growth. A dynamic learning environment is flexible, where instructional activities and initiatives align and center on the learner. The conventions for delivering higher education instruction are so well assumed that it can be easy for faculty to think of instruction in terms of what happens in their own courses or labs. In a dynamic learning environment, in contrast, instruction is aligned systematically throughout the entire student experience. Units coordinate their curriculum both internally and with other units, instructional innovations are welcomed and studied for effectiveness, and student advising is centered on the whole student rather than on a transcript. Beyond the classroom, institutional resources are also learner-centered–websites, for example, become tools for enrolled students instead of advertising to prospective students.

Uniqueness of experience. A dynamic learning environment is emergent, where the unexpected and unanticipated is welcome. Interdisciplinary activity is encouraged, seeding conversations, courses, events, research, and degrees. Nontraditional students find their experiences and backgrounds appreciated. The institution invests in infrastructure that can capitalize on real-time events, such as funding opportunities to respond to current events, or technology infrastructure that connects campus centers to the world.

Shared Identity. Finally, a dynamic learning environment is vibrant and engaged, where all campus constituents perceive and express a personal connection to the institution. The campus offers multiple shared spaces that are well designed to be welcoming and so are filled with constant activity from students, faculty, staff, and friends. These constituents share similar understandings of the values and characteristics of the institution and, importantly, they see in it a place for themselves now and in the future.

Let’s concentrate on building and sustaining this environment, rather than on one that is careful and planned. We may be less able to predict outcomes, but we’ll be more able to achieve our central mission of learning, creativity, and discovery.

MOOCs, participation, and the Long Tail

Student engagement is not only a key goal of teaching; in higher ed it can often be damned hard to achieve, especially when it comes to large classes. I've done work on how to increase engagement in large classes (and personally attempted some of the strategies), and I know that engagement is a big driver behind problem-based learning and also clicker use in large classes.

MOOCs, as the best example of the classroom “super-size me”, were not motivated by the goal of increasing student participation. But that doesn’t mean people haven’t taken on this challenge. In fact, for folks interested in connectivist learning, MOOCs may offer a unique opportunity for  supporting autonomous social-networked learning. Rechristened cMOOCs, these are learning communities that are learner centric rather than professor centric, full of the buzzing and blooming of self-directed, personal learning.

One important question, then, is what does this look like? And here is where I have some concerns about mistaking activity for community. In a presentation at the Educause 2014 Annual Conference, folks from Ohio State presented their experience with a MOOC on English writing. They related the ways in which they saw students “hacking” the course, creating their own learning objectives and their own content. They invoked the trope of the “cathedral and the bazaar” to draw attention to the bottom-up (rather than top-down) nature of the course.

I appreciated the enthusiasm of the presenters, and their endorsement and encouragement of student-centered learning. Yet the evidence was not compelling to me for one reason: the long tail.  Let's assume that student engagement is a variable and that every student begins at a certain default level of engagement (i.e., some students might be typically unengaged, others moderately so, and others highly engaged).  We can assume that the distribution of this characteristic across a population of students is normal, or even that it is skewed so that most students tend toward disengagement.  If we have a large sample (for example, the 18,000 people enrolled in this MOOC), there will be some number who will have high engagement, regardless of how we structure the course.  This is the long tail.

These instructors certainly validated and supported engaged activity, but without data about the distribution of student engagement, it is difficult to support any claim that the pedagogy was a key factor in creating this engagement. In a course of 18,000 students, if only 1% were engaged, that is a very robust community of 180 people.  Indeed, 1/3 of 1% would still be 60 people, certainly enough to present a culture of engagement.  Such a community hides the massive number of students at other points in the distribution. In the worst-case scenario, the instruction can create a more severe skew, with more “moderately” engaged students moving to be less engaged (through anonymity and social loafing effects).
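The back-of-the-envelope numbers here are easy to verify; a quick check in Python:

```python
enrolled = 18_000

engaged_1pct = enrolled // 100           # 1% engaged is still 180 people
engaged_third_pct = engaged_1pct // 3    # 1/3 of 1% is still 60 people
everyone_else = enrolled - engaged_1pct  # the 17,820 students the visible activity can hide
```

Even a vanishingly small engagement rate produces a community large enough to look like a thriving culture, which is exactly why visible activity alone says little about the distribution as a whole.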

In response, I think it is critical that MOOCs be studied from a social perspective: as communities that evolve over time.  A top-down organizational approach is nearly impossible at this scale (although many MOOCs have taken this approach and end up facing significant challenges).  Researchers should investigate questions such as:  What are the norms and rules for social behavior, and how do they develop?  Where and how do the social networks form? What is the culture of the community, and how is it enforced?  What diversity of roles is available to members of the community, and how are those roles formed and negotiated? Answers to these questions can help inform how to develop, nurture, and sustain a more participatory large-scale course environment.

Clinnin, Kaitlin, Thomas Evans, Evonne Kay Halasek, and Ben McCorkle. “When Cathedrals Become Bazaars: Notions of Community in an Open Course.” Educause 2014 Annual Conference.

It’s not easy being an open scholar

It’s not easy being an open scholar. Well, technically, it’s much easier than the traditional route. There are several thousand open access journals, which means a good number are being published in every field. And an increasing number of universities are hosting open repositories for their faculty’s scholarly work.


Research directions for interorganizational collaboration

As part of a project on collaboration in the Internet Engineering Task Force (IETF) with my colleague Natalie Nelson-Marsh, I've been reviewing the research literature on interorganizational collaborations (IOCs). These organizations occur across sectors and come in various structures and sizes. What they have in common is that they form mostly in response to contingencies, such as unexpected events or complicated or complex situations that can't be addressed by a single organization.

Most of the early literature, in the 1980s and 1990s, focused on two major issues: first, identifying the specifications of an IOC in terms of its defining structures and processes; second, the conditions under which IOCs tend to form. Not surprisingly, research investigated the inputs/conditions and outputs of IOCs, notably membership and shared purpose. Later work extended this research by turning to empirical study of IOCs themselves and identifying communication elements or processes that are common to different IOCs.  Heath (2007), for example, studies the significance of the “dialogic moment.” Hardy, Lawrence, and Grant (2005) identify key discursive forms that structure a collaborative.  And Koschmann (2012) explores how an IOC can sustain its membership through collective identity.

As noted by these and every other researcher in this area, IOCs are key to organizing in the 21st century. I think this is absolutely true. I see the very recent and dramatic rise in crowdfunded, crowdsourced, just-in-time business ventures as yet another contribution to this phenomenon (think Kickstarter). And so what I would hate to have happen is for us as researchers to sediment or solidify too early what we understand IOCs to be. In other words, to settle on the construct too quickly. Some assertions in the early literature run the risk of being taken as assumptions, and this is the time to ask more systematic questions. Here are some of the questions I think would be valuable for research to explore:

  • What is the conceptual relationship between collaborations as organizational forms (which I'll call “collaboratives” for clarity) and collaboration as an interactional process? Must collaboratives be governed collaboratively?
  • What is the role of self-reflection as being part of the organization? Must IOCs be intentional?
  • Is there an element of stewardship in the member role, either stewardship for the outcome or for the organization itself?
  • What are the individual strategic actions needed to maintain the organization as a collaborative?
  • What is the role of power and politics in collaboratives? Current research often holds that collaboratives are nonhierarchical (or at least that power differences are controlled in some way, as by a facilitator). But if we assume power to be an inherent part of organizing, what is its nature in self-governing collaboratives?

Most central to my interest is better understanding the communicative processes that constitute IOCs.  To the extent that “collaboration” as a process should be understood to be characterized by possibility, uncertainty, and intentionality, how do the processes in IOCs emerge, how are they sustained, and what role do they play in the ability of the IOC to sustain itself as an organizational form?



Hardy, C., Lawrence, T., & Grant, D. (2005). Discourse and collaboration: The role of conversations and collective identity. Academy of Management Review, 30(1), 58–77. doi:10.2307/20159095

Heath, R. G. (2007). Rethinking community collaboration through a dialogic lens: Creativity, democracy, and diversity in community organizing. Management Communication Quarterly, 21(2), 145–171. doi:10.1177/0893318907306032

Koschmann, M. A. (2012). The communicative constitution of collective identity in interorganizational collaboration. Management Communication Quarterly, 27(1), 61–89. doi:10.1177/0893318912449314

How do you demonstrate your new app is compelling?

Start-ups aren’t just for technical folks anymore.  Entrepreneurs with good ideas come from all disciplines, and can partner with developers to turn their vision into a product or service.  Folks who are able to recognize and analyze communication processes and needs (hey, listen up, comm majors!) are in a great position to invent great apps.

Michael Cusumano gives some great advice to would-be entrepreneurs as well as to their potential funders and investors.  He lists 8 things that should be used to evaluate the strength of a start-up proposal:

  1. A Strong Management Team
  2. An Attractive Market
  3. A Compelling New Product or Service
  4. Strong Evidence of Customer Interest
  5. Overcoming the “Credibility Gap”
  6. Demonstrating Early Growth and Potential
  7. Flexibility in Strategy and Technology
  8. Potential for a Large Investor Payoff

This is a fairly wide ranging list, and each element is important. When I work with students–whether they are studying communication, computer science, or media design–I challenge them to think more deeply and strategically about two items on this list: developing something compelling, and providing evidence that people will want what you’re offering.

Many entrepreneurs struggle to articulate what is compelling about their idea. We all can have this problem — I mean, we’re just, you know, excited by it…and…it’s so obvious that others will feel the same way, right?  As Cusumano points out, though, the bottom line is that entrepreneurs need to show that people will (a) use their product or service and (for many ideas) (b) pay for it.  So, to say that an idea is compelling is not to say that the idea is great in some abstract way that we can wax on about.  Instead, it is to concretely demonstrate its value to a base of users.

A mistake some investors and funders make is to accept as sufficient evidence that there is already a group of beta users or testers, and maybe plans for marketing.  Cusumano cautions against this, but doesn't really offer much advice on what to do instead.  He does advise having at least a prototype or a limited engagement of a service.

So how do you get out of this Catch-22? Entrepreneurs need investment to build the product that gets the users, but they need users (it seems) to get the investment.

This is a fundamental question, of course: how do you know which investments will be successful when you are looking at new ventures? As Cusumano's list suggests, it's possible to look closely at things like the business plan, the leadership team, the market, and how well the new venture is positioned in it.

These other elements are important, to be sure.  But if the product is not compelling, even these will not be enough. So, what should we expect instead?

I advise designers to build their case from a social or communication perspective.  At the most fundamental level, show that people, regardless of the technology available to them, have a strong need for something or already have a strong interest in something. Be as specific as possible without (and this is most critical and the hardest part) slanting your observations toward what your new product provides.  The way you answer this question will set up the challenges you’ll have to address for growing your user base. The basic assumption operating in my advice (widely supported by research in many areas) is that most people (not the early adopters, not your friends and family) don’t go around looking for change.  We tend to stick with habits. We don’t like uncertainty.

If your idea lets people do something that they can already do reasonably well with other means of communicating (including other products or services), concentrate on creating a low threshold for adoption.  Show how your idea will seem very similar to the user but will be valuable because it is more efficient, more accessible, less costly, more easily disseminated, etc.  I call this a converting idea. Think of Encarta, which was successful for many years because it worked hard to look like an encyclopedia.

If your idea lets people do something different from what they can do now, it is better if it lets people do more of the same, or what they do now “plus” something added on, rather than something completely new. Show how users can get all of the main functions they expect, and then also do something they couldn't do otherwise. I call this an augmenting idea. Think of Wikipedia.  It didn't do quite as good a job as Encarta of looking like an encyclopedia, but it did offer very timely information on a much, much wider range of topics (those that might not have made the cut for Encarta).  It does well enough as a converting idea, but really shines as an augmenting idea (and lately has increased its focus back on converting, to better match our expectations of what an “encyclopedia” looks like).

The last type of idea is the transforming idea.  These are the ones that get designers most excited.  These will "change everything." These will reinvent, reimagine, re-vision. And most of them will fail.   Those that don't will need evangelists to change the way people think, or other inducements (like organizational policies or new laws) that require people to change.  Remember the basic assumption: most people, most of the time, avoid change.  If your idea requires users to completely rethink the way they see something, most of them won't go to the effort.

Now, the nice thing about seeing all of these together is that there is a sweet spot to aim for.  Find the idea that can start as a converting idea, grow into an augmenting idea, and then finally prove itself to be a transforming idea.  Any fantastically successful idea will follow this path. Two examples:

  • Facebook tapped into the social processes already adopted by college students, augmented them with novel ways of displaying the information (though the information itself was still familiar), and finally has transformed our idea of what it means to be connected to one another.
  • YouTube felt familiar, like watching home movies or obscure cable channels; it just let us do that online. As content became more mainstream, it stayed familiar, like watching television.  Entrepreneurs saw the augmentation potential and began creating native content.  Now YouTube has transformed our understanding of the broadcast landscape, with mainstream media integrating comments and feedback, and YouTube "programs" being produced for traditional television programming.

And this pattern is repeated over and over again among successful ventures. Compelling ideas tell stories that center on ordinary, typical, even mundane communication and social processes. They don't need evangelists (though evangelists can help accelerate change), and they don't emphasize how cool the technology is (though indeed it might be).  Don't worry, though: because working and living with each other is still awfully complex, this leaves lots of room for great ideas.

Cusumano, Michael. (2013, October). Technology strategy and management: Evaluating a startup venture. Communications of the ACM, 56(10), 26–28.

Mobile phones don’t replace telecenters

I would wager that most of us assume that the widespread adoption of mobile phones would have all but replaced the "old school" (i.e., 1990s) telecenter.  Sharing a computer in a village or neighborhood (or using a local cybercafé) was supposed to provide people who otherwise couldn't afford it with access to the internet.  But, of course, with mobile phones we now have the internet in our pockets.

Turns out this hasn’t happened. What has happened tells us something interesting about communication.

An interesting report comes from Chris Coward, Principal Research Scientist and Director of the Technology & Social Change Group at the University of Washington Information School.  The group surveyed 7,000 people across five low- and middle-income countries.  One of the findings of the report is that computer use at "public access venues" generally has not declined.  Instead, the particular uses of each type of medium have organized into patterns.

Coward provides a list of the various purposes a user might have for using either a computer or a mobile phone.  Although he doesn't comment on the way these different uses cluster, I hope this is something the research group explores further.  What I see when I look at the results is that, overwhelmingly, when people use the internet to seek out information (particularly authoritative and reliable information), the computer is part of the mix.  In doing "research for school or work," for example, the computer is used nearly 100% of the time, and is the sole device used more than 50% of the time. The results are similar for "research health issues." In contrast, when the use is relational and ephemeral, the mobile phone is the dominant mode.  For example, for "keep in touch with friends," the phone is used 80–90% of the time, and nearly as often for "meet new people."

Perhaps this is due to obvious affordances; information is difficult to read on the small screen of a cell phone, for example. But perhaps there is more going on here, something that has to do with the functions of differing processes of communication. It will be interesting to see this research unfold.

Coward, Chris. (2014, August). Global computing: Private then shared? Communications of the ACM, 57(8), 29–30.