Jumping into video tutorials, exploring animation tools

I’m sold on the value of short (2-5 minute) videos and their ability to make information accessible. Internet-wandering learners are more likely to stay for the lesson when it engages them without requiring them to settle in and get comfortable.

So, I’m going to try my hand at some over the next few weeks. As I’m preparing topics and content (some foundational concepts in group communication and some commentary in comm & tech), I’m exploring different options for presentation and delivery.  Voice over PowerPoint is an obvious option, but I’d like to make it the fallback if all else fails.  Instead, I’m investigating several tools for video and animation, some of which my students have used in great ways.  I thought I’d share the list here.  Right now, it’s in alphabetical order.  I’ll come back in a while and re-order to show more of a ranking:

Real-time facial recognition and the end of obscurity

Obscurity, and our expectation of it, is a concept I hadn’t thought about before. We know that modern uses of technology bring a whole host of concerns about threats to privacy. We also know that people can post comments or images online anonymously, and that they don’t always live up to our highest ideals for civic conversation (see, for example, the very recent reports of anonymous users posting horrible tweets and pins to Robin Williams’s daughter). Obscurity relates to these, but is a whole new angle.

We expect obscurity when we are in a crowd or wandering in a public place. It’s not so much that we expect to be anonymous–though, technically, that is there too–as that we expect to not be noticed. We expect that pretty much everyone in this crowd or shared place–including ourselves–is happy to pass by everyone else without even registering who it is they pass. Obscurity means that if I passed you on a crowded street, I wouldn’t really notice it, and if someone were to stop me 10 feet later, I would be unable even to correctly guess the color of the shirt you were wearing. Google pops up these synonyms for obscurity: the state of being unknown, inconspicuous, or unimportant. And that promise of obscurity is useful: it allows us to go to the grocery store, or the movies, or a concert, or any of the other places of our daily lives without having to think about who might be recognizing us or watching us.

When we are diligently (obsessively?) tagging ourselves and our friends in photos we post to Facebook or Twitter or Google+ or Instagram or whatever, and when governments and other institutions (like schools) create databases of photos, we are adding to a massive trove of information that experts are figuring out how to use in real-time facial recognition. Add this feature to Google Glass, and you can no longer expect obscurity, because the technology will have no problem registering and remembering and identifying every person it passes. How will this change social expectations? If Glass recognizes someone in a crowd before I do, will I be expected to stop and chat? If Glass gives someone else information about me, can they pretend that we have met before, you know, at that BBQ at Doug and Shelia’s last summer? The article raises very real concerns for the lives of protected populations, such as victims of domestic abuse.
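To make the mechanics concrete: real-time recognition boils down to a nearest-neighbor lookup over a database of face “embeddings” (numeric fingerprints produced by a recognition model). Here is a minimal sketch; the tiny three-number vectors, the names, and the threshold are invented stand-ins for what a real face-encoding model and photo database would supply:

```python
import math

# Hypothetical database: name -> face embedding. Real systems store
# 128- or 512-dimensional vectors produced by a trained model.
known_faces = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(embedding, threshold=0.95):
    """Return the best-matching known identity, or None if no match is close enough."""
    best_name, best_score = None, 0.0
    for name, known in known_faces.items():
        score = cosine_similarity(embedding, known)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# A glimpsed face whose (invented) embedding closely matches Alice's:
print(identify([0.88, 0.12, 0.31]))  # matches "alice"
```

The social point is in the loop: a wearable device running this lookup against a large enough database registers every passerby, which is exactly what obscurity assumes no one does.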

This is an interesting example of how technology brings to the surface expectations and assumptions we didn’t really know we had — or at least, that I hadn’t fully appreciated for myself.   I look forward to some comm research in this area.

Erica Klarreich, “Hello, My Name Is…: Facial Recognition and Privacy Concerns”, Communications of the ACM, August 2014, p. 17-19.

3D printing for a wearable Facebook?

Thought experiment: 3-D printing as information and communication technology

Recently, ASSETT invested in a 3-D printer. This device receives jobs from a computer, the same way as your typical laser or inkjet printer. But the jobs you send aren’t documents. Instead, they are instructions for extruding liquid plastic, layer by layer, into a three-dimensional object. Other 3-D printers (3DP) can use other materials, such as liquid metal or even granulated sugar.
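To make the layer-by-layer idea concrete, here is a minimal sketch of what slicing software does: it divides an object’s height into thin layers and emits movement-and-extrusion commands for each one. The G-code-style strings and coordinates are simplified illustrations, not any particular printer’s actual output:

```python
def slice_to_commands(height_mm, layer_height_mm=0.2):
    """Emit simplified per-layer commands: raise the nozzle to each layer's
    height, then trace that layer's outline while extruding plastic."""
    commands = []
    layers = round(height_mm / layer_height_mm)
    for layer in range(1, layers + 1):
        z = round(layer * layer_height_mm, 2)
        commands.append(f"G1 Z{z}")         # move nozzle up to this layer
        commands.append("G1 X10 Y10 E0.4")  # extrude along the outline
    return commands

# A 1 mm tall object printed in 0.2 mm layers takes 5 passes:
for cmd in slice_to_commands(1.0):
    print(cmd)
```

The analogy to document printing holds: just as a print job encodes where to put ink on a page, a 3-D print job encodes where to put material in space, one thin slice at a time.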

3-D printing has been popular for some time in the corners of computer science focused on crafting, or “making.” There are special gatherings of these folks at events called “Maker Faires,” which bring together all things crafty. At my university, computer science professor and Maker Faire enthusiast Mike Eisenberg has enthused about 3-D printing for quite a while now.

We held off in ASSETT for a few years: first, because the printers were very expensive, and second, because we couldn’t really see many ways the technology could help A&S faculty improve teaching and learning. What a difference a few years makes — printer cost has decreased dramatically, and we have done our own investigation to discover courses that could be enriched by either faculty or students creating 3-dimensional objects. None of these were communication courses.

So, here is an interesting challenge for you: How (if at all) can 3-D printers be a communication technology? Document printing replaced earlier means of putting information on paper, and wide-scale adoption altered communication and information flows, changing the way we both disseminate and store information. Printing ink on paper is easily a communication technology.

I don’t see 3-D printing replacing any current modes of communicating. This makes it more difficult to adopt, because we can’t just overlay our current perceptions and practices onto its use. Instead, 3DP will have to offer us something new–either by augmenting what we can do, or completely changing or transforming something in our worldview.

A fairly safe guess is that 3DP will change interaction and teamwork in design domains, or any domain that already has a building or making component. A guess a little farther out is that it can augment relational communication, particularly at a distance. What if we develop material analogs to emoticons? Instead of sending someone a happy face, we send their desktop 3DP a command to print a smiley (or some new 3D equivalent). OK, and here’s a stab at a transformational idea — what if we combine 3D printing with wearable computing? We could create custom components for devices or sensors that communicate with one another and aggregate that information into a dynamic set of data, which in turn alters the information in the sensors as well as the instructions for what to print. Like a wearable Facebook.

Keep an eye on this technology. It will be interesting to see what emerges.

Statutory damages and innovation

For the past few years, I’ve had the opportunity to work with the Media Informatics program at Linnaeus University in Växjö, Sweden. I have been a guest instructor in a course called Social Media Ecology. There’s lots I enjoy about this opportunity, including the chance to learn about what my innovative friend Marcelo Milrad is working on. But perhaps the most enriching part for me, both personally and intellectually, is working with students from across the world who represent many different disciplines–most of them technical, such as computer science or instructional technology design.

In this course, what I bring to the table is (not surprisingly) a perspective that teaches students how to put communication first when conceiving and designing a social media application. This is challenging for them (and I think challenging for most people) because it is the technology that stands out, and they want to change the way people do things. A Herculean or, perhaps, Sisyphean task. More on that in another post.

What brought this to mind just a few minutes ago was an article I finished reading in the July 2013 issue of Communications of the ACM, by Pamela Samuelson. She explains why statutory damages are so chilling to innovation in new media. The laws that govern how intermediaries (what I think of as those whose business models depend on using content generated elsewhere) can operate are not only strict, but also highly punitive. One adverse decision can bankrupt a startup instantly. As I work with students, I realize that I need to be mindful of this — yes, intellectually and hypothetically, we can think of lots of cool and interesting applications that repurpose data (content), but building one might cost you a fortune rather than making you one.

Revisiting Flaming: Blurting

I took a trip down memory lane today, reading studies of flaming. For those of you who started using the Internet only since about the turn of the millennium, flaming might not be a term you have heard much before (I know many of my students haven’t heard it), but it was a major research concern in the 1990s. Flaming is sort of like trolling. Wikipedia (in an unusually weak entry) disambiguates “flaming” to include Flaming (Internet) as “the act of posting deliberately hostile messages on the Internet used mainly by a troll.” It’s important to remember that flaming was much more significant than this.

Flaming was part of the “dark side” of online communication, even before the Internet. It was seen as proof that using computer-based technology to communicate dehumanized us. Made us less empathetic. Some researchers said it was because we lost the elements of face-to-face communication.  Others said it was because we could be anonymous — in fact, a quite important stream of research (SIDE) is based on this assumption.

I first taught about flaming back before the commercial internet, and before the modern-day GUI mail client or web browser. Back then, flaming was a topic for teaching net etiquette, aka netiquette  (another term that’s not used much anymore). Some of what we taught can more properly be understood as socialization into the emerging norms of online communication. Things like DON’T TYPE IN ALL CAPS EVEN IF YOU THINK IT IS EASIER TO READ (my father’s personal favorite) BECAUSE PEOPLE WILL THINK YOU’RE YELLING. Or, don’t use a lot of exclamation points, even if you just want to add emphasis, because people think you’re angry!!!!!!!!!!!!!!!!!!!! We taught about using emoticons to convey emotions that couldn’t be conveyed in text (no kidding, these ubiquitous little beasties didn’t really enter general consciousness before the 1980s 🙂 😉 😛 ).
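Those netiquette rules amount to a checklist of surface markers. As a toy illustration (my own sketch, not an instrument from the research literature), a naive detector for those markers might look like:

```python
def flame_markers(message):
    """Count naive surface markers of 'flaming': shouting in all caps and
    runs of repeated exclamation points. A toy heuristic only -- whether a
    message is actually a flame depends on context the text can't capture."""
    markers = []
    letters = [c for c in message if c.isalpha()]
    if letters and len(letters) >= 10 and all(c.isupper() for c in letters):
        markers.append("all-caps")
    if "!!!" in message:
        markers.append("repeated-exclamations")
    return markers

print(flame_markers("WHY WOULD YOU EVER SAY THAT!!!"))  # both markers fire
print(flame_markers("Thanks, that was helpful."))       # no markers
```

The trouble, of course, is that such surface features turn out to be rare and unreliable, which is part of why researchers later moved toward perception-based definitions.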

But I haven’t thought about flaming much in the past decade, as online communication has been more deeply integrated into everyday communication, and the distinction between the online and offline self has faded. It came back to mind lately due to an article recently published in Communication Monographs, by Dale Hample, Adam S. Richards, and Christine Skubisz. The topic of the article is blurting. Yes, you read that correctly. Not typically a word you see in scholarly research, but it is a very good onomatopoeia. Blurting is the act of saying something without thinking–something that, as soon as it leaves our lips, we wish we could take back.

The researchers asked what sorts of people were most prone to blurt, and this is what they found (yes, it’s from the abstract, but I swear I read the whole thing):

Blurters endorsed more messages overall [as appropriate] and rejected fewer because of harm to other or relationship; they saw interpersonal arguments in a less sophisticated way, and as less cooperative or civil, but more pointedly emphasized the utility, identity display, dominance, and play goals for arguing; blurters were higher in verbal aggressiveness, indirect interpersonal aggression, psychological reactance, sensation seeking, psychoticism, extraversion, and neuroticism; and they were lower in perspective-taking and lying. People were most likely to blurt when they believed they had high rights to speak in a situation, and were less likely when personal benefits and relational consequences were at issue, or when the situation made them apprehensive.

The worry that the internet would bring on hordes of flamers never panned out. In fact, researchers had a hard time finding flames. O’Sullivan and Flanagin found that less than 1% of all emails or discussion posts had objective characteristics we could identify as flaming. This led them to develop a contextual definition that identifies flames based on the perceptions of the communicators involved. And though I generally agree with contextual approaches, this solution left me unsatisfied in this case.

So here’s what occurred to me as I read this article: what if flaming had less to do with the technology (or even context), and more to do with all of the psychological and cognitive traits identified by Hample and colleagues? What if flames were the online equivalent of blurting? Writing or posting something that, in hindsight, we wished we wouldn’t’ve. Some of us do it because we read faster than we think, and our fingers are flying before our internal editing kicks in. But what if the regular flamers–what we now call trolls–are simply blurters in a text-based communication environment?

All of the consequences of blurting/flaming — flurting? — remain significant: relationships harmed, confidences broken, embarrassment, and so on. But we can look to the relationship between cognitive and communicative processes for understanding, rather than pinning it on the technology.

An important takeaway from this possibility: it reinforces that research should not be asking the Difference Question. The DQ begins with a soft form of technological determinism — an assumption that the explanatory mechanism for any observed difference is the technology. Such research then sets up the study so that the point of contrast is online vs. offline.

Flaming may have had little to nothing to do with the technology after all.  OMG WTF !!!  😀

References and for more information:

Hample, D., Richards, A. S., & Skubisz, C. (2013). Blurting. Communication Monographs, 80(4), 503-532. doi:10.1080/03637751.2013.830316

O’Sullivan, P. B., & Flanagin, A. J. (2003). Reconceptualizing “flaming” and other problematic messages. New Media and Society, 5(1), 69–94. doi:10.1177/1461444803005001908

Turnage, A. K. (2008). Email flaming behaviors and organizational conflict. Journal of Computer-Mediated Communication, 13(1), 43–59. doi:10.1111/j.1083-6101.2007.00385.x

Tanis, M., & Postmes, T. (2003). Social cues and impression formation in CMC. Journal of Communication, 53(4), 676–693. doi:10.1111/j.1460-2466.2003.tb02917.x

Professional Advice, circa 1993

My graduate program had a weekly “Research Lunch,” and during my last year in the program, the organizers invited a few of us who had one foot out the door to give some advice to the newer students. I recently dug up my notes from this event, and I’m sharing them here. Most of the advice is still relevant. I don’t know if that is a good thing or not…

[Read more…]

What Makes a Study Interesting?

Every once in a while, I go through my file cabinets (yes, I still have file cabinets), looking for things to purge or—if I’m lucky–for lost treasures.  I found a few today, and here is one of them.  It is a handout from a course on quantitative reasoning that I took in graduate school from M. Scott Poole in the early 1990s at the University of Minnesota. The title is “What makes a study interesting”.   As with most of Scott’s ideas, I think this remains relevant and good advice to new scholars (heck, to established scholars as well).  I’ve recreated it here exactly as given to me and my coursemates, with the exception of correcting a typo and the numbering of one list. [Read more…]

Teaching students how to think about and write about research

I teach at all levels of higher education — from first year college students to PhD students. No matter the level, one of the challenges is teaching students how to think about research differently. Many are used to thinking about research–published research, especially–just as something to understand and remember. Maybe to criticize or disagree with. But this misses one of the core elements that makes research exciting for those of us who do it for a living — which is that research is a conversation held among people who care about a subject, want to develop an argument or position that will help us understand it better, and want to discuss those positions with others. [Read more…]

Getting Big Data is no longer the issue. What is guiding our use of it?

On December 12, I had the pleasure of moderating and serving as a respondent on the panel, “Social Media Analytics: Making Sense and Businesses from Big Data” at Linnaeus University in Sweden. The panel was part of the LU Media Technology department’s “Social Media Week.” Panelists were Peter Bjellerup (Social Business Consultant, Global Centre of Competence, IBM), Ian Dunne (Expound Social Media), and Marc Jansen (Hochschule Ruhr West and Linnaeus University).

Listening to each speaker, there was no doubt in my mind that big data has arrived. Production or creation of data is no longer an issue. And even taking into consideration the real challenges laid out for us by Jansen (such as needing to develop new approaches to memory management and structuring multiple data formats), I don’t really think the technical issues are insurmountable any more. What I saw in these presentations was a clear turn in the tide, from data qua data, to the uses of that data. And particularly fascinating were the recurring references to “conversations” and “meanings.” Bjellerup’s central argument was for the importance of thinking about big data as a tool for creating conversations — for listening to and engaging with customers. The challenge that organizations now face is how to translate data into action: how to use it to make decisions, to create relationships. Dunne took this a step further in a way I think is important for communication scholars: to devise means to bring data to specific and particular embodied situations in order to shape and alter the conversation in the moment. In other words, to push the data to users in real time according to context-specific demands. I have talked about this elsewhere as a vision of using technology to augment communication – to remove constraints or create possibilities, but without fundamentally altering the nature of the relationship.

The overall theme brought home by the speakers is: how do we move from data to action? I also heard a troubling assumption underlying each of the speakers’ comments–though I suspect they would take issue with me here. What I heard was an odd acceptance that data collection is so ubiquitous (and that we are so frequently complicit in that collection) that the data can know us, can reveal us in quite intimate ways. Ways that we might not even be conscious of ourselves. In that way, the data are transparent–there is no interpretation or rationalization of which action is more important than another. The system just collects our actions. So, then, the question of decision making — of what to do with this data and how to make sense of it — becomes a critical social issue. The panelists argued that their enterprises seek to “measure, process, explain the success of social media.” My question is: what are our criteria for success? Research questions of 20 years ago are no longer as relevant: devices are now highly diffused, data is collected across multiple contexts, most users provide data freely, and social media is an accepted forum for interaction and exchange. So now the questions are about content. Yesterday’s data was planned, principled, and structured. Today’s is both unstructured (unprincipled, “ambient” data) and decontextualized (data is captured without our needing to know how it will be used).

We are in a sea of data. If the current issue is how to create conversations from that data, to build relationships, then this is no longer properly a technical issue or even a business issue. This is a social and ethical issue. I am not referring to the standard concerns based on individual privacy. Although there are concerns about how the data will be used, surveillance is more relevant to decisions about gathering and storing information. I am talking about something different. The emerging connections of multiple sources flatten our experiences into a single data stream which can then be appropriated to alter the nature of our interactions with one another, our relationships, and our communication. The data about us in the system makes us familiar to others, sometimes in deeply personal ways (in a story that is quickly becoming famous, marketers can analyze data to find out your family planning strategies). We are immediately and intimately familiar to those we haven’t even met. We will need to discuss as a society and in our communities new questions: should there be requirements for creating a “relationship” with someone? Should users of data be required (or expected) to follow norms relating to social rules and roles? And for communication scholars: what will be the emerging patterns of interaction in these situations? What are the communication ethics of “personal information differentials”?

Humans have always had information about one another. And, as a result, we have developed expectations for how to use that information, for what is acceptable and what is not. Now we are entering into a new information era. Our traditional guidelines may no longer hold. We need new conversations.

Humans as Mashups or, The Crowdsourced Human

I have written before about the importance of mash-ups to the future of communication processes. A key to mash-ups is the use of data produced by various otherwise unconnected sources. At the time, I was thinking of how these various data would be used to create media products, like dynamically rendered websites or video mashups. But an article by J. Verini in the December 2012 issue of Wired introduced me to Hatsune Miku, whom I see as a human mashup.

Miku is an image, an animation, and a pop star in Japan. It is not unusual for companies to create personas to sell products, and that is why Miku exists. She was created in 2007 to sell a virtual voice program made by a company called Crypton. But timing and context were critical. Miku was created in Japan where, according to Verini, fan culture is a popular phenomenon and fan-created content is ubiquitous. The content poured into a technological environment capable of monitoring, analyzing, filtering, and extracting themes to constitute and sustain a new human identity: Hatsune Miku.

Verini writes,

“Miku…is just unreal enough, it seems, to be relatable. At a fan convention, Condry [from MIT] told me, he asked some kids why this was. “They said, ‘We know she’s not a person. We like that she’s a machine. Those of us who are into this like dealing with machines more than with people.'”

This is one example of why constitution and configuration should be a new focus for communication studies. Traditional conceptions of communication (as connection or information) miss the point of what is happening here: design and dynamic organization. Miku is an open-source person. What might result if these same processes were used to create other identities?

Reference: Verini, J. (2012, December). Immaterial girl. Wired, 20, 146.