Networked Innovation: An Exploration of the Lazer-Friedman Model

Paulina M. Paiz
Feb 17, 2021 · 8 min read

Introduction

The Lazer-Friedman Model helps explain the role communication network structures play in promoting or inhibiting innovation in the context of complex problem-solving. According to this model, agents engage in parallel problem-solving by choosing either to exploit the solution of one of their connections or to explore the solution landscape and develop one on their own. Lazer and Friedman's results suggest that network structures that are more efficient at disseminating information, such as small-world and fully connected networks, will perform worse in the long run than less connected structures, such as lattice networks, because they generate fewer innovative solutions.
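
As a rough illustration (not the authors' actual code), the exploit-or-explore step can be sketched in NetLogo as below. The names my-solution, my-score, nk-score, and mutate are my own hypothetical shorthand, with nk-score standing in for evaluation on the NK fitness landscape.

turtles-own [ my-solution my-score ]

to run-step
  ask turtles [
    ; look for the best-performing neighbor in the communication network
    let best-neighbor max-one-of link-neighbors [ my-score ]
    ifelse best-neighbor != nobody and [ my-score ] of best-neighbor > my-score [
      ; exploit: copy the visibly better solution
      set my-solution [ my-solution ] of best-neighbor
      set my-score nk-score my-solution
    ] [
      ; explore: mutate the current solution and keep it only if it scores higher
      let candidate mutate my-solution
      if nk-score candidate > my-score [
        set my-solution candidate
        set my-score nk-score candidate
      ]
    ]
  ]
end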

It is often assumed that “High impact discoveries and inventions today rarely emerge from a solo scientist, but rather from complex networks of innovators working together in larger, more diverse, increasingly complex teams.” The Lazer-Friedman model shows that decreasing density or connectivity in a network actually preserves diversity in the system. This finding disproves the common intuition that facilitating the diffusion of information in a team will improve its long-term performance and increase its number of unique solutions. Similarly, Evans et al.’s results disprove the popular assumption that bigger is better when it comes to team size.

The empirical puzzle lies in the fact that if inefficient communication leads to more innovative solutions, as Lazer and Friedman suggest, then one would expect large groups, which are "more likely to have coordination and communication issues," to perform better and generate more unique solutions. A ring network does generate more unique solutions than a small-world or fully connected network, which is consistent with the idea that reducing group size is a good idea, but changing the number of agents within the same network structure should also reflect Evans et al.'s claim that team size matters. In a NetLogo simulation I designed, I was not able to confirm the hypothesis that, all else equal, changing the number of agents in the network alters the number of unique solutions produced.

As a second point, reading Evans et al.'s publication made me reflect not only on the size of the network but also on its density. Two of the network structures Lazer and Friedman used in their research were the linear and the fully connected. The former had only six agents, while the latter had more than twenty. Empirically, the density of these structures did not make sense: one would expect a team of only six agents to be fully connected, with each agent having a distinct tie to every other agent. Having a fully connected network of more than twenty team members did not seem practical either. In larger teams, the path length is much higher; most people do not have direct access to the whole graph. Instead, larger teams have hierarchies and bureaucracies that organize the flow of information.

Furthermore, the model does not accurately represent how people make decisions under uncertainty. The expected utility of emulating someone else's answer that has already proven to be a better solution is higher than the expected utility of exploring the ruggedness of the fitness landscape. This is because the better solution is almost certain to give the agent a similar "pay-off" or score, assuming agents' skills do not vary enough to prevent them from correctly replicating the solution. Exploring, on the other hand, might have a higher payoff, but it also comes with higher uncertainty. Both exploring and exploiting present an opportunity for gains, and according to Prospect Theory, agents are risk-averse when facing gains.
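
To make the intuition concrete with purely hypothetical numbers: suppose copying a neighbor's proven solution yields a score of 0.70 almost for certain, while exploring yields 0.85 with probability 0.4 and only 0.55 with probability 0.6, for an expected value of 0.4 × 0.85 + 0.6 × 0.55 = 0.67. Even if the two expected values were identical, Prospect Theory predicts that an agent choosing between two gains will prefer the near-certain 0.70 over the gamble.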

Design Methodology

I explored the model using NetLogo in three main ways. First, I examined the effect of changing the size of the network. Second, I added an attention parameter that limits the number of agents a turtle can pay attention to per round in a fully connected network. Finally, I added a probability distribution intended to more accurately reflect an agent's preference to exploit or explore under uncertainty.

To see if the size of the network influenced performance, I used BehaviorSpace to run small networks and large networks one hundred times each and then compared the results in terms of average score and number of unique solutions. Because I also wanted to use the data from these trials to see if the effect of network size changed with the level of communication efficiency in the network, I compared small lattices to large lattices and small fully connected networks to large fully connected networks, for a total of four hundred runs (one hundred rounds each). All of them used a fitness landscape with N = 10 and K = 5. The small lattice had ten agents and a degree of connectivity equal to two, which I took from Lazer and Friedman. The small fully connected network had six turtles and a degree of connectedness equal to six. The large lattice and the large fully connected network each had forty agents. Degree of connectivity was kept constant between the small and large lattices and, of course, varied between the small and large fully connected networks. I chose the sizes of the networks somewhat arbitrarily because Evans said: "You might ask what is large, and what is small…Well, the answer is that this relationship holds no matter where you cut the number: between one person and two, between ten and twenty, between 25 and 26."

My hypothesis for the first puzzle, about network size, was that the average score and the number of unique solutions would not change significantly, because Evans et al.'s results were based on an analysis of scientific publications that was, first, conducted much later than Lazer and Friedman's experiment and, second, closer to qualitative text analysis than to computational social science.
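
For reference, this is roughly how the two structures can be set up in NetLogo. The procedure names and the num-agents parameter are my own shorthand rather than the exact code I ran, and the ring construction assumes turtles are created in a fresh world so their who numbers run from 0 to num-agents - 1.

to setup-lattice [ num-agents ]
  clear-all
  create-turtles num-agents
  ; ring lattice: each turtle links to the next one, so every turtle ends up with degree 2
  ask turtles [
    create-link-with turtle ((who + 1) mod num-agents)
  ]
end

to setup-fully-connected [ num-agents ]
  clear-all
  create-turtles num-agents
  ; complete graph: every turtle is linked to every other turtle
  ask turtles [
    create-links-with other turtles
  ]
end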

I implemented the attention parameter on a fully connected network with ten agents by adding a few lines of code that limit each agent's visibility of other agents' solutions. I then ran the model one hundred times with the attention radius on, set to four agents at a time, and another one hundred times with the attention radius off, so that each agent looked at all of the other turtles. My hypothesis for the attention parameter was that because each agent would have less exposure to better solutions, agents would be more likely to explore the solution landscape and keep diversity in the system.
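
A minimal sketch of that restriction, assuming a global attention-radius slider (set to 4 in my runs, or 0 to turn the restriction off) and a hypothetical visible-neighbors reporter used wherever the run step looks for better solutions:

to-report visible-neighbors
  ; with the restriction on, only a random subset of neighbors is visible this round
  ifelse attention-radius > 0 and attention-radius < count link-neighbors [
    report n-of attention-radius link-neighbors
  ] [
    report link-neighbors
  ]
end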

I corrected for the way people make decisions under uncertainty by increasing the probability that an agent would emulate the solution of another agent with a better solution. This was done by modifying the flow control of the run-step block of code with an if-else statement: if a turtle encountered another turtle with a better solution, it would exploit that solution with probability p = 0.6 and explore with probability 1 − p. My hypothesis was that because the agents were pushed toward copying other solutions, there would be de facto centralization, the network would not perform as well, and the number of unique solutions would be low.
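
The modified decision step can be sketched like this (again with my own illustrative names; exploit and explore stand for the original copy and local-search procedures):

to run-step-corrected
  ask turtles [
    let best-neighbor max-one-of link-neighbors [ my-score ]
    ifelse best-neighbor != nobody and [ my-score ] of best-neighbor > my-score and random-float 1.0 < 0.6 [
      ; a better solution is visible: copy it with probability 0.6
      exploit best-neighbor
    ] [
      ; otherwise (or with probability 0.4), search the landscape independently
      explore
    ]
  ]
end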

On top of exploring the model, I also verified whether Lazer and Friedman's results held up in my trials. Keeping network size, degree of connectivity, and the ruggedness of the fitness landscape the same, I observed the influence of communication structure on network performance by comparing the average score over time of lattice versus fully connected networks. I expected lattice networks to perform better in terms of average score and number of unique solutions because they are less efficient at disseminating information.

Results

Lazer and Friedman's results held up in my model: the average score of the lattice network was higher than that of the fully connected one when I used small networks. Even though the results on the right may appear not to concur with Lazer and Friedman's paper, they actually do, because the paper notes that over short time scales highly efficient networks will outperform less efficient ones.

My results for network size were mixed. For lattice networks, more agents meant a higher average score, but only over intermediate time frames; after that, networks with fewer agents got higher average scores, though the difference does not seem statistically significant. For fully connected networks, large networks outperformed smaller networks over short and intermediate time scales. The rate of change in the number of unique solutions did not differ greatly across the four cases.

Analyzing overall performance, I noticed that the large networks consistently got higher average scores than the small networks.

The attention radius did not seem to have a considerable effect on either the average score or the number of unique solutions.

Surprisingly, adding the probability condition barely influenced the average score of the network: the corrected version performed better than the original by an average of only 0.1. Unfortunately, BehaviorSpace had issues identifying the number of unique solutions, but I would expect the corrected version to produce fewer unique solutions because more agents are likely to emulate others' solutions.

Discussion

Exploring and extending the model was insightful in various ways. For one, I noticed that the large networks consistently got higher average scores than the small networks. I went back to Lazer and Friedman's paper to see if they had noted this. They mention in passing that "For a given structure, smaller populations perform worse than larger populations because they start from a smaller set of possible solutions." At first sight, this seemed to contradict Evans et al.'s findings; however, thinking about it conceptually, what happens to communication structure in large networks is that people are pushed to coordinate more efficiently simply because there are so many people. Massive hierarchies and bureaucracies not only limit information flow but also act like hubs, filtering who sees what and standardizing everything so that information moves faster through the organization. This would agree with Evans et al.'s finding that small groups do more disruptive work because they have less to lose and do not have to answer to anyone.

In the introduction, I mentioned why using linear or lattice structures for small networks and fully connected structures for large networks is not reasonable. Exploring the model supported this intuition. I therefore conclude that small networks studied in the context of team problem-solving should be fully connected, and that large networks should reflect hierarchy by having people at the top act as hubs that filter information.

Although making the model account for how people make choices under uncertainty did not change the results much, it did reinforce the fact that exploration is not as attractive to agents as the Lazer and Friedman model assumes. A solution might be to give more ambiguous instructions to teams so that they have almost no option but to explore. Another would be to think about how to raise the expected value of exploring, given how risky the option is for agents.

Taking all of this into account would make the Lazer-Friedman model more comprehensive and more useful for identifying optimal conditions for innovation. Policy implications include promoting more funding for small-group projects and treating government investment in academic research like venture capital: small projects carry large risk but also the potential for large, innovative rewards. Further research that seeks to extend the Lazer-Friedman model may find it interesting to examine how asymmetric information and differences in skill influence the network's performance.
