#1

What can we learn about a Twitter-based class and its participants based on the node-level SNA measures such as in-degree and out-degree centrality?


#2

Centrality measures allow us to identify the most connected members of the class, and from there we can likely determine who is most influencing information flow, perhaps setting the tone, sociability, and other conversational norms of the discussions that take place.
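As a minimal sketch of what this looks like in practice (using networkx and a tiny hypothetical mention network, not real course data; the account names are made up), edges point from a tweet's author to the account they mention or reply to:

```python
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("ann", "prof"), ("bob", "prof"), ("cara", "prof"),  # mentions of the instructor
    ("prof", "ann"), ("ann", "bob"), ("bob", "ann"),     # peer exchanges
])

in_c = nx.in_degree_centrality(G)    # attention received (a prestige proxy)
out_c = nx.out_degree_centrality(G)  # activity directed at others

most_mentioned = max(in_c, key=in_c.get)
print(most_mentioned)  # "prof" — receives the most mentions
```

Comparing each node's in- and out-degree centrality is enough to separate the 'transmitters' from the 'receivers' before any qualitative reading of posts.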

In an xMOOC setting (although I think it would likely transfer to a Twitter-based class), we used centrality measures to help us figure out that the TA was too active, jumping into conversations too often, too early, and thus terminating potential dialogue, rather than giving learners a chance to contribute before an ‘authoritative’ response was given. So in that sense, centrality measures informed the instructional and communication strategy that we used, and allowed us to intervene with the TA to change her approach.

Similarly, centrality measures can allow us to identify ‘learner leaders’ and highly engaged learners as potential candidates for community TA roles in further iterations of a course.

In both of these cases, centrality measures were used in an exploratory manner, and we then went in and read the content of posts.


#3

@drewpaulin Nice contribution! Did you also use other centrality measures? For example, betweenness to find the “brokers”? Or closeness to get a better idea of the reach individuals have?
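For readers unfamiliar with those two measures, here is a small illustrative sketch (toy graph, assumed node names, not course data): betweenness flags nodes that sit on many shortest paths between others, and closeness flags nodes that are near everyone else.

```python
import networkx as nx

# Two small conversation clusters connected only through one "broker" account.
G = nx.DiGraph([
    ("ann", "bob"), ("bob", "ann"),        # cluster one
    ("cara", "dee"), ("dee", "cara"),      # cluster two
    ("bob", "broker"), ("broker", "cara"), # the only bridge between clusters
])

bet = nx.betweenness_centrality(G)  # who sits on shortest paths between others
clo = nx.closeness_centrality(G)    # who can be reached quickly by others

print(max(bet, key=bet.get))  # "broker" bridges the two clusters
```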


#4

Thanks @katerinabohlec. Aside from in- and out-degree, we focused on macro, network level measures rather than on individual measures, although your suggestions are good ones to follow up on! We were aiming for a lightweight, quick exploratory analysis approach as the idea is to be able to provide quick turnaround, actionable feedback to course designers and instructors.

Diameter and density were particularly interesting, as they really clarified the nature of xMOOCs: density was very low, which makes sense when you look at the attrition rate and the number of people who posted once and then dropped out of the course discussion, or the course altogether. The diameter was also fairly small, so it confirmed that information was flowing primarily through the central node of the TA. The combination of low density and small diameter is notable, and signals that we may want to treat (some) xMOOCs differently than we would other networks, considering the high attrition rate and learners’ tendency to audit parts of the course selectively.
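The low-density, small-diameter pattern is easy to reproduce on a toy network (a hypothetical star of learners around a TA, plus one-post dropouts with no ties; again networkx, not our actual data). Note that diameter is only defined on a connected graph, so it is computed on the largest component:

```python
import networkx as nx

# Star-like network: a TA at the centre, plus many one-post dropouts.
G = nx.Graph()
G.add_edges_from(("ta", f"learner{i}") for i in range(20))
G.add_nodes_from(f"lurker{i}" for i in range(30))  # posted once, no ties

density = nx.density(G)  # actual edges / possible edges
core = G.subgraph(max(nx.connected_components(G), key=len))
diameter = nx.diameter(core)  # longest shortest path in the main component

print(f"density={density:.3f}, diameter={diameter}")
```

With 51 nodes and only 20 edges the density is tiny, and every learner is at most two hops from any other (via the TA), which is exactly the hub-and-spoke picture described above.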

Moving over to a Twitter-based or open, connectivist course context, I would hope/expect to see larger values of diameter, which could suggest that information/contributions/discussion originating from within the ‘class’ is also reaching/resonating with people and communities far outside the core group of participants, which I would see as a positive sign for learning (or at least for people engaging in interesting conversation in the hope of learning…).


#5

Confession: these are just my impressionistic comments, rather than detailed evidence-based observations. They are also from the perspective of an SNA noob! Apologies!

In order to get a better picture of the class participants, I found the DrL layout most helpful, since it pushed out to the edges many of those with low centrality, whilst concentrating those with high centrality. This enables us to inspect more quickly types of activity which might set apart the two types of participant. The total degree (and node size) can then point us quickly to the ‘key players.’ A closer inspection reveals the majority to be people (mainly course participants?), but there are also non-human nodes like the course hashtag (#cck11feeds), @youtube and other applications like @symbalooedu (behind which of course might indeed be people). It’s interesting to speculate on the ‘role’ (and agency?) of the non-humans.

Flicking back and forth between indegree and outdegree views allows us to quickly distinguish between the transmitters, receivers and those with more balanced activity. (Knowing the roles of different individuals here would assist in making a more meaningful interpretation; who was/were the course facilitator/s?) However, one might expect a course leader to have high outdegree, but why substantially higher indegree? Are participants asking questions, seeking approval, or is it simply an artefact of the environment, i.e. RTing what the prof says? In addition to that, there’s perhaps an effect caused by the relation between the course leader and course hashtag which might need accommodating.
One interesting (impressionistic) detail was that it seems that people generally have a slightly higher outdegree (the nodes appear to be bigger when viewing outdegree). Does this mean the class is shouting out, but not listening? I think I’d want to drill down into that at the individual level.

Although mentioned under the network-level thread, could reciprocity also be considered a node-level attribute? Rather than considering reciprocity across the whole network, reciprocal exchanges become even more visible when selecting individual nodes. Are these the back and forth exchanges indicative of those more likely to engage in discussion, question what others are saying, seek clarification?


#6

Hi Ian,

I also found the DrL layout helpful in the context of my post above; it clearly indicated those who posted once to an introduction thread and were never heard from or interacted with again.

Course leaders are likely to have high in-degree because in-degree can be understood as a proxy for prestige or popularity within the network. People are mentioning, RTing, replying to posts. Even if course leads and big names tweet less, those tweets/nodes are amplified via RT and mentions, so you get leaders who have low out-degree (not too many contributions) and high in-degree (lots of attention from the network).

In a study (Gilbert & Paulin, 2015) on expertise and centrality in academic conference Twitter networks and their propensity to support learning, we looked into whether the conference ‘experts’ had high levels of centrality and prestige, tweeted more or less than non-experts, and were frequently mentioned in network tweets. We found that experts were likely to have higher centrality than non-experts, were retweeted significantly more frequently, and were mentioned significantly more. Tweet frequency (we expected it to be lower for experts) was not associated with expertise; some tweeted a lot, others didn’t.

I also think it would be interesting to dig further into why outdegree is so much higher than indegree for most within this network; I imagine that it is the same for most networks as only a few people gain the popularity/prestige of lots of mentions and RTs across ‘discussions’ network wide, even though there may be good dialogue going on throughout the network between smaller in-groups.

On reciprocity at the node level: I’ll let someone else speak to this, as it’s an interesting point and I honestly don’t know the answer offhand. I do know in M. Hawksey’s TAGSExplorer tool, there is a ranking of ‘top conversationalists’ in the network viz. I had always assumed (but honestly should look into it further to see if I’m correct) it was something similar to this: does a node have lots of two-way ties?
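One way to make that idea concrete (a sketch on a made-up graph; `two_way_ties` is a hypothetical helper, not part of any tool mentioned above) is to count, for a given node, how many of its outgoing ties are reciprocated:

```python
import networkx as nx

G = nx.DiGraph([
    ("ann", "bob"), ("bob", "ann"),    # reciprocated pair
    ("ann", "cara"), ("cara", "ann"),  # reciprocated pair
    ("dee", "ann"),                    # one-way mention, never returned
])

def two_way_ties(g, node):
    """Count neighbours this node both mentions and is mentioned by."""
    return sum(1 for nb in g.successors(node) if g.has_edge(nb, node))

print(two_way_ties(G, "ann"))  # 2 mutual exchanges
print(two_way_ties(G, "dee"))  # 0 — dee's mention was never returned
```

networkx also offers `nx.reciprocity(G, nodes=...)` for a per-node ratio of mutual ties, which might be closer to what a ‘top conversationalists’ ranking uses.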

Ref:
Gilbert, S., & Paulin, D. (2015, January). Tweet to learn: Expertise and centrality in conference Twitter networks. In Proceedings of the 48th Hawaii International Conference on System Sciences (HICSS) (pp. 1920–1929). IEEE. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=7070042


#7

Always good to have some weekend reading :wink: Thanks for the pointer to this paper and for the generous feedback Drew.

I wonder if the ‘RT’ can therefore serve as a metric in its own right (and whether the means to highlight that in some way on the visualisation might be useful), or whether, when attempting to interpret the underlying meanings, it simply generates noise?


#8

@IaninSheffield, good observation! DrL indeed is very effective in visualizing various Twitter networks, as those tend to include different overlapping (or not) discussions/communities. But I also always check the modularity value to make sure that the network layout is not “artificially” pulling things apart.

In particular, high values of modularity (>0.8) indicate clear divisions between ‘communities’, while low values (usually less than 0.5) suggest that ‘communities’ tend to overlap more (in other words, the network is more likely to consist of a core group of nodes).
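A quick way to sanity-check what the layout is showing (a sketch on a toy graph of two tight cliques joined by one bridge, using networkx rather than Gephi) is to detect communities and compute the modularity of that partition:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Two tight cliques joined by a single bridge edge: clear community structure.
G = nx.Graph()
G.add_edges_from(nx.complete_graph(["a1", "a2", "a3", "a4"]).edges())
G.add_edges_from(nx.complete_graph(["b1", "b2", "b3", "b4"]).edges())
G.add_edge("a1", "b1")

communities = greedy_modularity_communities(G)
q = modularity(G, communities)
print(len(communities), round(q, 2))
```

If the modularity of the detected partition is high, the visual separation in the layout reflects real structure; if it is low, the layout may indeed be “artificially” pulling a core group apart.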