What can’t we learn by using SNA?
First, let me thank you for the excellent work you have done on Netlytic and your generosity in sharing it (including a paper you sent recently on request). Netlytic is fairly easy to use and I think it can offer insights for research and practice on graphs of our social networks. I have spent much time in the last year trying to use (social) network analysis on ‘found data’ from a Twitter hashtag and a FB group associated with a MOOC. I learned much, often about what I don’t know and also about what we can’t know. I do know that I wish I had collected the data at the time instead of retrospectively, so my advice would be: use Netlytic early and often.
But what can’t we know? We have all probably been dazzled by pretty network diagrams with brightly coloured clusters of nodes that are sometimes called sub-communities (I have seen lots from NodeXL in the last year). If we dig deeper, we might find out the name of the algorithm that performed the clustering, but what does it mean? I have seen Martin Hawksey’s TAGSExplorer used effectively in course inclusion activities on a Twitter hashtag, but it seems intuitive somehow.
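For anyone wondering what those clustering algorithms actually do: many of them are simple iterative procedures. Here is a minimal pure-Python sketch of label propagation, one common community-detection approach (the six-node ‘retweet’ graph and node names are invented for illustration; real tools typically use more sophisticated modularity-based variants):

```python
from collections import Counter

def label_propagation(adj, rounds=10):
    """Each node repeatedly adopts the most common label among its
    neighbours; nodes sharing a final label form a 'sub-community'."""
    labels = {node: node for node in adj}  # start: every node is its own community
    for _ in range(rounds):
        changed = False
        for node, neighbours in adj.items():
            if not neighbours:
                continue
            counts = Counter(labels[n] for n in neighbours)
            best = counts.most_common(1)[0][0]
            if labels[node] != best:
                labels[node] = best
                changed = True
        if not changed:  # converged
            break
    return labels

# Two loosely connected triangles -- a toy 'retweet' network
adj = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"],
    "d": ["e", "f", "c"], "e": ["d", "f"], "f": ["d", "e"],
}
labels = label_propagation(adj)
```

The point Frances raises stands, though: the algorithm mechanically finds densely connected regions, and calling them ‘sub-communities’ is an interpretive leap.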
Although Twitter data is fairly open, what about Facebook? Bernhard Rieder explains the implications of changes in the Facebook API that meant that the FB social network diagrams his FB app Netvizz could (help to) generate were no more (http://thepoliticsofsystems.net/2015/01/the-end-of-netvizz/). So my initial goal of directly comparing social networking interaction on Twitter and Facebook proved elusive. I was keen to see how SNA could be used in conjunction with analysis of qualitative data in research I was doing with colleagues. The answer was that there was no simple comparison of graphs, but the combination of Netvizz and Gephi did help us to explore the somewhat impenetrable FB group archive. FB loves the stream and casts a bit of shade over the archive.
Anyway back to the grindstone to complete changes to the paper (thanks reviewers) so we can publish what we have learned.
Thank you for your insights and for sharing your experience with some additional tools to collect and analyze social media data using SNA.
If anyone is interested, here is a list of other automated tools that you might find useful to support the collection and analysis of social media data:
What we can and cannot learn by using SNA is a very relevant and important question that is still rather underdeveloped in relation to learning research.
A general question that intrigues me has to do with the underpinning principles/theories by which SNA has been developed and how it is (currently) applied to learning research.
To me a lot of the indicators used within SNA are based on a ‘transactional’ paradigm.
SNA was developed to understand, for example, the flow of information or communication patterns across networks – i.e., more factual data. If person A passes something on to person B, traditional SNA assumes that person B has received it. However, where learning is concerned this assumption may not hold. First, the extent to which ‘whatever’ was passed on was actually received may be uncertain. Second, the contribution of that information to the actual learning process of the receiver is uncertain. Hence, the network theory or operationalization of indicators behind the tests that researchers conduct may have different implications.
Does density in a communication network imply the same as density in a learning network? What does the shortest path mean in terms of “learning”? Are all SNA indicators by default useful indicators of learning?
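To make those indicators concrete, here is how density and shortest path length are conventionally computed (a toy undirected network with invented nodes; Maarten’s question is precisely whether these numbers, however correctly computed, mean anything for learning):

```python
from collections import deque

def density(adj):
    """Density of an undirected network: observed ties / possible ties."""
    n = len(adj)
    ties = sum(len(nbrs) for nbrs in adj.values()) / 2  # each tie is listed twice
    return ties / (n * (n - 1) / 2)

def shortest_path_length(adj, source, target):
    """Breadth-first search: the number of 'hops' a message must travel."""
    seen = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            return seen[node]
        for nbr in adj[node]:
            if nbr not in seen:
                seen[nbr] = seen[node] + 1
                queue.append(nbr)
    return None  # no path: the 'message' can never arrive

# A four-person chain: a - b - c - d
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(density(adj))                         # 3 ties of 6 possible -> 0.5
print(shortest_path_length(adj, "a", "d"))  # 3 hops
```

Whether ‘3 hops’ means anything for learning (as opposed to message delivery) is exactly the open interpretive question.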
I think that a more advanced theory of SNA is needed to guide studies on networked learning using SNA.
SNA is a very flexible method, but it requires a solid theoretical framework to enable interpretation of findings. In the absence of a solid theoretical framework of learning through networks, researchers rely on conceptualisations from related research domains, such as communication studies. By applying analysis techniques that reflect a different theoretical orientation, researchers risk type I and type II errors.
I think we at least need to be aware of the kind of relational data we are gathering. Is the data at tie level actually about learning, i.e., are we looking at learning relationships, and how can we use SNA to analyse learning in these networks?
And/or are we looking at communication patterns in a social network, using SNA to see how certain interaction patterns impact the learning potential of this network? In that case, I think we need additional information about the learning process, and we are looking at a multi-method approach.
Hi Maarten, Anatoliy et al. Great to have this hotseat – good to see the enactment of a learning network as we demonstrate here how the ‘latent tie structure’ of the hotseat brings people together around a common interest, and from there perhaps to build first weak ties and later strong ties. I’m also intrigued that I have co-authored with Maarten, and with Anatoliy, but we have not yet done one for all 3 of us – ‘network closure’ re the co-authoring relation is in our future!
In response to Maarten’s post – quite agree that what ‘learning’ means in an SNA context is up for grabs, but so too is ‘learning’ on its own. Is it ‘learning’ when we sit in class, watch an (educational) video, or listen to a lecture? We need to appeal to the psychologists and philosophers to talk about inner learning. So, we are always looking at outward signs – improvement on test scores, following routines, behaving appropriately. I am leaving the classroom here and considering learning in general – I very much believe – as I’m sure most of you do – that there is a lot more tied up in ‘learning’ than the acquisition of educational facts. There are great papers about what schools do as far as teaching societal norms (e.g., Bourdieu), and if we look at what people need to learn to work online, at a distance, and through computer media, we find a lot of learning about technology, socio-technical practices, social learning practices, language, participatory culture, etc. (for discussion of this, see Paulin & Haythornthwaite, 2016).
Re SNA then, what change in behaviour do we want to observe that is the outward sign of internal learning? Within the group, I think we can see behaviours such as adherence to norms, appropriate use of language, and adoption of group-specific shorthands (jargon, acronyms, etc.). If we are looking for ‘learning to learn in a social context’, we are looking for signs of attention to others – e.g., replies and interactivity (Sheizaf Rafaeli from the communication field would count an ‘interaction’ as A posts to B, B replies to A, and A re-replies to B); even argument is appropriate.
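Rafaeli’s three-turn test could be operationalised over a reply log along these lines (a rough sketch only; the log format and participant names are invented):

```python
from collections import defaultdict

def interactive_pairs(replies):
    """Pairs meeting Rafaeli's three-turn test: A posts to B, B replies
    to A, and A re-replies to B (in time order).

    `replies` is a time-ordered list of (author, replied_to_author) pairs.
    """
    turns = defaultdict(list)  # unordered pair -> who spoke, in order
    for author, target in replies:
        turns[frozenset((author, target))].append(author)
    interactive = set()
    for pair, speakers in turns.items():
        # collapse consecutive turns by the same speaker, then check for
        # at least three alternations: A, B, A
        collapsed = [s for i, s in enumerate(speakers)
                     if i == 0 or s != speakers[i - 1]]
        if len(collapsed) >= 3:
            interactive.add(tuple(sorted(pair)))
    return interactive

# Hypothetical reply log: A->B, B->A, A->B is interactive; C->A alone is not
log = [("A", "B"), ("B", "A"), ("A", "B"), ("C", "A")]
print(interactive_pairs(log))
```

A single reply is attention; only the full back-and-forth counts as interactivity in Rafaeli’s sense, which is what the alternation check encodes.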
If we are looking for application of knowledge in a new context, perhaps we look for bringing new references into the discussion (posts that contain (new) URLs), which by itself is not an SN aspect, but is if that new URL is in response to someone’s post. A new URL could be an elaboration, an added example, or a clarification, in the context of learning.
If we are looking for application of what is being taught, I always think that the adoption of domain-specific vocabulary is a reasonable indicator. Terminology- and jargon-heavy fields require the adoption and use of new language (just think of all the SN terms we need to adopt and use in a specific way: degree, reach, path, triad; in-degree, out-degree, density; ERGM; and more). Network-level definition and use of terms is – it seems to me – evidence of learning.
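As a crude illustration of vocabulary adoption as an indicator, one could scan time-ordered posts for uses of domain terms per participant (the jargon list, posts, and naive whitespace tokenisation are all invented for the example):

```python
def vocabulary_adoption(posts, jargon):
    """Record which domain-specific terms each participant has used.

    `posts` is a time-ordered list of (author, text) pairs; `jargon` is
    the set of terms whose uptake we treat as a (rough) outward sign
    of learning.
    """
    adopted = {}
    for author, text in posts:
        words = set(text.lower().split())  # naive tokenisation
        adopted.setdefault(author, set()).update(words & jargon)
    return adopted

# Invented discussion snippets
jargon = {"density", "in-degree", "triad", "ergm"}
posts = [
    ("ana", "what does density mean here?"),
    ("ben", "density is ties over possible ties"),
    ("ana", "so a triad with high in-degree raises density"),
]
adopted = vocabulary_adoption(posts, jargon)
```

Growth in each participant’s adopted-term set over time would then be the ‘outward sign’ Caroline describes, though distinguishing genuine uptake from mere parroting still needs content analysis.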
Long post, but it is a really important question and area to be explored, i.e., what is our operational definition of learning in an SNA context.
Ref: Paulin, D. & Haythornthwaite, C. (2016). Crowdsourcing the curriculum: Redefining practices through peer-generated approaches. The Information Society, 32(2), 130-142.
Thanks for this very useful and interesting article, Caroline – it will be invaluable for a paper we are revising after review. In the paper, you discuss the affordances of the technologies that enable/inhibit learning. What we would question in our work is whether the affordances of Facebook, designed to generate content/clickbait for the purposes of selling advertising, are particularly suited to enabling learning. Facebook groups seem to be used increasingly by learners and teachers in cMOOCs and xMOOCs. The data that could enable SNA to help explore learner practices is not necessarily available as Facebook responds to privacy concerns.
That’s an interesting perspective on affordances – considering how the intentions of the designer regarding profit-making can work with or against user intentions. I know a lot of learning groups use Facebook to communicate. There is, however, one issue with MOOC use of Facebook, particularly a cMOOC: how you connect between Facebook and outside technologies. When I view a site on my laptop, I’m outside Facebook; but when I do that on my phone, I’m ‘inside’ Facebook and have a really hard time finding out how to pass on a reference or site to someone not on Facebook. Thus, Facebook can impose a technical boundary as well as a social one (i.e., it’s just easier to do correspondence all in one place).
From an SNA/MOOC/Facebook perspective then, one exploration might be how much in-Fbook posting happens vs. out. What is the ‘reach’ of a message posted on Fbook vs. another part of the MOOC platform (assuming a cMOOC on multiple technologies)? If learning is associated with the ability to bring new resources to bear on a current issue, how difficult is it to bring new resources on board? And is it one way only – easy to get into Fbook, but hard to get out? It may not be so much that you do a complex SN analysis on data, but that you look at Fbook affordances (or whatever the opposite of affordance would be – dis-affordance?) from an SN perspective – reach, closure, network size, cliques (e.g., Fbook and non-Fbook users in a MOOC), brokers or structural holes (where is info not crossing technology platforms?).
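A first pass at the ‘brokers and structural holes’ idea could be as simple as partitioning people by platform and looking for ties and actors that span the divide (a sketch with invented names; real broker measures such as betweenness or Burt’s constraint are considerably more involved):

```python
from collections import defaultdict

def platform_brokers(ties, on_facebook):
    """Find ties crossing the Facebook / non-Facebook divide, and the
    'brokers' whose contacts span both sides of it.

    `ties` is a list of (person, person) pairs; `on_facebook` is the set
    of people active on Facebook.
    """
    adj = defaultdict(set)
    for a, b in ties:
        adj[a].add(b)
        adj[b].add(a)
    crossing = [t for t in ties
                if (t[0] in on_facebook) != (t[1] in on_facebook)]
    brokers = {p for p, nbrs in adj.items()
               if any(n in on_facebook for n in nbrs)
               and any(n not in on_facebook for n in nbrs)}
    return crossing, brokers

# Invented MOOC cohort: ana and ben are on Facebook, cam and dee are not
ties = [("ana", "ben"), ("ben", "cam"), ("cam", "dee")]
crossing, brokers = platform_brokers(ties, {"ana", "ben"})
```

Here the info crossing the platform boundary depends entirely on one tie (ben–cam), which is the kind of structural hole the post describes: where the tie is absent, resources stay on one side.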
Thanks so much Caroline. The issue you identified of the ease of getting stuff into Facebook compared with getting it out chimes with what we have found, and is probably not surprising given Facebook’s raison d’être. I can’t say too much more just now as I am desperately trying to work on the paper with co-authors so it can be published. I have a paper in a symposium at Networked Learning that looks at cross-platform and disconnective practice issues in Social Networking Services, and if the paper looking at Facebook and Twitter use in a MOOC is accepted by then, I could share it with you. If you have any time, perhaps we could chat over a coffee at NLC2016.
I agree with @maarten comment about what we are measuring and what we claim this means. It is often assumed that when we measure information seeking or advice ties that the good that is exchanged is also processed by the recipient. But the results are often inconclusive (http://www.sciencedirect.com/science/article/pii/S0747563214005615)
I appreciate @Caroline’s attempt at detailing what learning could be in different contexts. I think that is important and should be taken up in articles.
Very interesting posts, @maarten and @Caroline! Since your posts illustrate what I’ve learned in practice, I’d like to share those experiences here. In the first example, Drew Paulin and I explored centrality in conference Tweet networks. Drawing upon the idea of more knowledgeable others, we wanted to see if there was a connection between actors’ roles and their position in the network. Using SNA we were able to make that connection, but had to stop short of drawing any conclusions about learning. While we know that interactions between more knowledgeable others and learners can facilitate learning, all we could say was that connections were made, not that learning was enabled through these connections. Caroline, I think this is why your suggestions (looking at the vocabulary and content used in the Tweets) are so important – they can allow us to make connections between relationships that might facilitate learning and evidence of learning.
As another example, Gruzd and Haythornthwaite conducted SNA on a Twitter-based community of practice. One of the conclusions was that the dense network structure and the inter-connections between community members with varying professional roles suggested potential for knowledge exchange and learning across professional boundaries. In interviews I’ve conducted with participants, I’ve found that not only do community members feel like this is the case, but that it’s one of the most valuable aspects of community membership to them.
In sum, I’ve observed just what Maarten posted - that SNA provides great factual data. When supplemented with qualitative data, we can see more about learning processes within the networks and even the value of these processes for learners within the network.
Gilbert, S. & Paulin, D. (2015). Tweet to learn: Expertise and centrality in conference Twitter networks. Proceedings of the 48th Hawaii International Conference on System Sciences, 1920-1929. doi: 10.1109/HICSS.2015.231
Gruzd, A. & Haythornthwaite, C. (2013). Enabling community through social media. Journal of Medical Internet Research, 15(10). doi: 10.2196/jmir.2796
I agree with your suggestion about supplementing SNA data. I have been working on a multi-method approach (also to be used over time) in which SNA, content analysis (CA) and contextual analysis (CxA) are integrated.
The idea is to combine data on ‘Who talks to whom?’ (SNA) with ‘What are they talking about?’ (CA, artefacts) and ‘Why are they talking as they do?’ (CxA, interviews, observations).
These methods are combined to triangulate and contextualise our findings and to stay close or connected to the first hand experiences of the participants engaged in networked learning.
De Laat, M., Lally, V., Lipponen, L. & Simons, R-J. (2007). Investigating patterns of interaction in networked learning and computer-supported collaborative learning: A role for Social Network Analysis. International Journal of Computer-Supported Collaborative Learning, 2(1), 87-103.
I tend to align with Maarten and Caroline: the presence of connections and transactions (viewed alone) offers no more than the potential for learning to take place. If no connections exist, thereby eliminating transfer, then learning simply doesn’t happen. [Maybe]¹
In a sense, although we can drill down as far as individual exchanges, we’re largely still viewing from the structural level. We can ‘break open’ a tie to see what was exchanged by the tweet, but that doesn’t tell us anything about the changes which occur in either recipient or transmitter. Or perhaps it would be fairer to say, as Katerina pointed out, that the results are inconclusive. As Caroline says, to examine learning we need to look for some external manifestation or proxy indicating a change in behaviour, knowledge, skill or understanding. Since these are effects occurring at an individual level, won’t they therefore be invisible to a network visualisation, in the same way that atomic structure is at too small a scale to be examined using visible light?

I find my thoughts drawn back to the classroom, a place with which I am more familiar. If as researchers we wanted to examine the effects of an intervention which involved teacher-pupil and pupil-pupil interactions (say questioning techniques), then SNA might provide a helpful visualisation of the interactions and exchanges which take place. Being able to view who interacted with whom and how often would doubtless prove instructive for the teacher. This intervention however, like the majority in education, would doubtless be intended to improve pupil learning; but in order to establish whether learning had occurred, we need an additional metric or tool to probe for changes at the individual level.
¹ I wonder about the unknown unknowns: the lurkers who may form an invisible sub-stratum within a network and who (I assume) don’t show up on network visualisations. They are connected through following, either of others or of hashtags, but until they tweet, do not appear in the data. They may be learning vicariously from the activity and interactions of others in the network, yet are not contributing, so are beyond the scope of our instruments to detect … let alone examine any changes they might undergo. I wonder if they’re the dark matter and energy of a learning network?
Good luck with the paper, @francesbell . It would be great to read it once it is published. There is definitely a need to better understand learning practices when multiple social media platforms are used at the same time.
Related to your work, we (@Caroline, @drewpaulin, Rafa Absar) recently examined how multiple social media platforms (Twitter, online forum, blogs and blog commenting) were used in two cMOOCs (including #cck11). We found that different platforms were used for different purposes, but most interestingly only a small portion of participants were active on two or more platforms in these classes. (The paper will appear in the Proceedings of the WWW16 Workshop on ‘Learning & Education with the Web of Data’ later this spring; but here’s an earlier poster based on this work: http://www.slideshare.net/primath/learning-analytics-45906041 ).
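That ‘small portion active on two or more platforms’ finding suggests a simple descriptive measure anyone could apply to their own data: count, for each participant, how many platforms they were active on (the activity sets below are invented for illustration, not the study’s data):

```python
from collections import Counter

def platform_overlap(activity):
    """Given a mapping of platform -> set of active participants, count
    how many people were active on exactly 1, 2, ... platforms."""
    per_person = Counter(p for users in activity.values() for p in users)
    return dict(Counter(per_person.values()))

# Hypothetical activity sets for a cMOOC (names invented)
activity = {
    "twitter": {"ana", "ben", "cam"},
    "forum":   {"ben", "dee"},
    "blogs":   {"ben", "cam"},
}
print(platform_overlap(activity))  # here, half the participants use one platform only
```

The returned mapping (platforms-used → number of people) makes the cross-platform ‘small portion’ immediately visible.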
What I think would be really helpful in this line of research is to go beyond the analysis of the most active participants and also examine lurking behaviour (who is reading and accessing what); unfortunately, such data are usually not readily available to researchers unless it is a closed system.
@katerinabohlec, thank you for sharing this paper with us. It’s an interesting study. It looks like it examines a potential relationship between the students’ grades and various SNA metrics based on two types of networks (“read” and “reply”).
I suspect that one of the possible reasons why results may be inconclusive here is because SNA measures based on data derived from online forums don’t usually capture those students who are doing well on their assignments but who are not active forum participants.
In a similar study with my colleagues at Dalhousie University, we are observing that SNA and SNA viz can be a great tool specifically for assessing class participation and engagement, but not necessarily final grades as some students chose not to participate in discussions but were still able to get good grades on their individual assignments.
@IaninSheffield, I absolutely agree with you here… But I also think that over time as lurkers read and learn, some of them will become more confident and comfortable with the learning environment and community, and possibly start sharing their own experiences and expertise and engaging others; thus, becoming more “visible” and “detectable” by SNA.
For example, a node that is becoming more “central” over time may represent such behaviour. Of course, because each class is unique in some way, we would need to collect more empirical evidence to match what SNA tells us with what actually happened.
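One way to watch a node becoming more ‘central’ over time is to compute a simple centrality measure per time window; here is a minimal sketch using raw tie counts (degree) over an invented interaction log:

```python
from collections import Counter

def degree_by_window(timed_ties, window):
    """Tie counts (raw degree) per participant in successive time windows,
    to watch a lurker become more 'visible' over time.

    `timed_ties` is a list of (time, sender, receiver) interactions;
    `window` is the window length in the same time units.
    """
    windows = {}
    for t, sender, receiver in timed_ties:
        w = windows.setdefault(t // window, Counter())
        w[sender] += 1
        w[receiver] += 1
    return windows

# Invented interaction log: (time, sender, receiver)
timed_ties = [
    (0, "ana", "ben"),    # early on, 'lia' only reads and is invisible
    (1, "ben", "cam"),
    (10, "lia", "ana"),   # later she starts posting...
    (11, "lia", "ben"),
    (12, "cam", "lia"),   # ...and others engage her in return
]
windows = degree_by_window(timed_ties, window=10)
```

In the second window the former lurker has the highest tie count, which is the surfacing behaviour Anatoliy describes; proper longitudinal SNA would of course use normalised centrality measures and more principled windowing.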
Thanks @gruzd – the poster is fascinating. You identified the hidden nature of activity data on Social Networking Sites, but another aspect that fascinates me is the dynamic and also hidden nature of the algorithms that control what we see and how. The Twitter of 2011 is different from today’s Twitter: http://www.theverge.com/2016/2/10/10955602/twitter-algorithmic-timeline-best-tweets. This dynamic nature of SNSs has implications for research and practice.
And thanks for the week’s discussion. It has been enlightening for me.
I think that’s highly likely Anatoliy, so perhaps a way of monitoring that temporality is needed. Integrating the time dimension as part of the visualisation might enable those sorts of insights?