thelingspace:

linguisten:

ichbinkahless:

Could you imagine if modern physicists treated Einstein and Newton like modern linguists treat Chomsky?

I love the social sciences. But social scientists really need to stop treating everything like it’s academic warfare, where anyone who’s wrong about anything is a “linguistic imperialist”.

To turn your argument on its head: imagine if modern linguist(ic)s did treat Chomsky’s hypotheses the way they would be scrutinized in modern physics. Where are the clear protocols, experiments, observations, the reproducibility, in brief — empirical evidence? Einstein would not have gotten a Nobel Prize for his work had it not withstood peer scrutiny. Newton’s findings are in accord with empirical data that other physicists could gather in the same ways he did. That is not the case with Chomsky.

I am sorry, but that has got nothing to do with science, be it “hard” or social. Incontrovertible axioms and dogmatic infallibility are more typical of religions. That would also account for the way Chomskyans attack “heretics”.

I end up thinking about this sort of thing a lot. This is the big picture question behind a lot of what I’ve done in my adult life. I view myself as a scientist, and I came into linguistics from doing bioresearch – molecular biology stuff. And here’s where I always come back to: generative linguistics, based in some conception of a Universal Grammar, is a scientific enterprise. If I felt generative linguistics wasn’t a science, I’d have quit. We made an episode about this earlier in the year, but there’s some more stuff that I want to say around it, and this is a good time. And this’ll probably get long, but that’s how we do things here.

At the core of the generative linguistics project is the idea that the ability to use language is an innate property of the species. We just have some knowledge of how language can work that’s in our heads as babies. And we use that knowledge to acquire language ridiculously quickly: we know the sound system of our language from the time we’re 12 months old, we know a lot about syntax and interpretation by the time we’re 2-3 years old, we pick up words at an astonishing rate, and we can even tell apart different languages and sort their grammars in our brains as tiny little kids. Without some knowledge of the parameters of the system they’re learning, kids can’t work this out in the time frames that they do. If we say all we have is basic problem-solving strategies and statistical learning without any innate linguistic knowledge, then we can’t capture the real world data. For example, linguists have shown kids can’t learn the English stress system or even learn where word boundaries are in the time frame they acquire them without assuming there’s some base of knowledge underneath it.
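To make the “statistical learning” idea concrete, here’s a toy sketch (purely my own illustration, not anyone’s actual model) of transitional-probability word segmentation, in the spirit of the classic Saffran-style experiments. The learner places a word boundary wherever the probability of the next syllable given the current one dips. The syllable stream and the threshold here are made up for demonstration; the point in the acquisition literature is that cues like this alone underdetermine what kids actually learn, in the time they actually take.

```python
# Toy transitional-probability (TP) segmenter: a purely statistical
# learner that posits a word boundary wherever TP between adjacent
# syllables drops below a threshold. Illustrative only.
from collections import Counter

def segment(syllables, threshold=0.75):
    """Group a flat syllable stream into 'words' at low-TP points."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        tp = pair_counts[(a, b)] / first_counts[a]  # P(b | a)
        if tp < threshold:          # low TP -> boundary before b
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# A made-up stream built from three "words" (pa-bi-ku, ti-bu-do,
# go-la-tu) in varying order, so within-word TPs are high (1.0)
# and across-word TPs are low (0.5):
stream = ["pa","bi","ku","ti","bu","do","go","la","tu",
          "pa","bi","ku","go","la","tu","ti","bu","do",
          "pa","bi","ku"]
print(segment(stream))
# → ['pabiku', 'tibudo', 'golatu', 'pabiku', 'golatu', 'tibudo', 'pabiku']
```

A learner like this recovers the toy “words” from pure co-occurrence statistics, but the argument in the acquisition work is that for real languages — stress systems, interpretation, cross-language sorting — this kind of cue on its own isn’t enough without some linguistic knowledge underneath.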

So from our current state of knowledge of how language works, we believe we have a system of innate knowledge in our heads: of phonology, morphology, syntax, and semantics. The Universal Grammar project – the heart of generative linguistics – is really just an ongoing attempt to characterize that system, by looking at how languages behave, coming up with hypotheses about the principles and parameters that underlie the system, getting more data, and then adjusting our hypotheses until they fit better. That’s science.

And we’ve had a lot of changes in linguistics since the 1950s, just within the generative part. Syntax has moved from phrase-structure grammars and simpler transformational grammars to X’ Theory and Government and Binding to the Minimalist Programme and phase theory. Phonology has gone from SPE to autosegmental phonology and feature geometry to Optimality Theory. I don’t think anyone in Generative Land really believes we have it all solved yet, and we may blow things up again. I think we probably will (particularly with OT). But that’s also science – we discard stuff when it doesn’t work.

We’ve taken ideas from even well-known linguists and tossed them away when they clearly wouldn’t work. Just look at, say, Sympathy in OT, proposed by one of the people who came up with the theory to begin with. It was pretty clearly wrong, and we got rid of it. People are still toying with the systems, trying to see if we can make it work with the data we have already and are getting more of, and if it becomes clear that we can’t, we’ll try to find something that works better.

I don’t really think there are incontrovertible axioms in generative linguistics. Even things as famous as the Binding Principles have changed in their characterization as we’ve gotten more syntactic research done. It’s like any other science: you can tinker and propose small things at the edges without upsetting anyone. No one ever got mad at me for being like, hmmm, the Prosodic Transfer Hypothesis should also apply to comprehension, as well as production. It was a small-ish piece in second language acquisition. But you want to change a big piece in the field, a core idea like Binding or Feature Geometry? You better have the data to back it up. We try to fit it into the theory first, to see how far we can push the ideas we already have. But if the data doesn’t fit the hypotheses, then we move on.

Even super basic stuff, we should be questioning, as we learn more and get more sophisticated. My favourite class in grad school (and possibly ever – it’s a close run) was taught by a phonologist named Dan Silverman. This was the class: he had written a book about phonology arguing that, well, phonemes were not actually a thing. And his attitude was “here’s my argument. I researched it a ton. You guys are smart. COME AT ME.” And we did! I think we approached it with an open mind, and so did he, and it was awesome. I wasn’t entirely convinced, but man, I can see it. He had an argument for sure. It was so cool.

Two things there: it’s hard to imagine something more basic than phonemes in linguistics – that’s like week 2 of your intro course (and video #4 for us). And also, this happened at the linguistics department at McGill, which is about as generative a place as you could imagine. But we were still taking up this debate, and it ended up being really interesting. We should keep doing that.

Are we really doing the best job we can as scientists in linguistics? There’ve definitely been people who have been too dismissive of people working outside the generative framework, and that’s just unhelpful. We should be willing to work with anyone who’s working with us in good faith. But there’s more experimentation going on now, across discipline boundaries, which is really important. Like, there was just a workshop last month about doing syntactic parsing models with Minimalist Programme stuff, and that is sorely, sorely needed – I really want to see ways to reconcile top-down and bottom-up processing facts. We should keep integrating ways to work on learning new things.

But we shouldn’t think that the syntactic judgments of the past that were used for a lot of the theory are just wrong. A couple of recent papers by Jon Sprouse and Diogo Almeida (with Carson Schütze on one of them) looked at what happens if you go through and take all the judgments from a syntax textbook, or a random sample from a leading theoretical linguistics journal, and then do experiments with people to see if they agree with the judgments. And… they do. The vast, vast majority of those judgments, which on their face don’t seem so experimentally valid, can be and have been backed up by experimentation.

As computers have gotten more powerful and we’ve gotten the ability to crunch more data and do more experiments, we have to ask whether our belief that certain things have to be in the grammar was just an artifact of not having that power before. But the thing is, we’re doing that, too – there’s a paper presented by Sprouse and colleagues at NELS this past month looking at exactly that, and still finding that grammar is necessary. But we should keep checking! If we can pare down what’s in UG, then we should.

Because ultimately, we want to get this right. We want to capture the entirety of the linguistic system. And there’s a ton of cool linguistics stuff outside of these questions that is super worth studying! Language variation and historical linguistics and language documentation and more, it’s all really interesting and vital research.

But when it comes down to it, the generative enterprise is a worthwhile one. The data that we have an innate linguistic system is, to me, very convincing, and so we should work out what all is going on in our heads. The only way to do that, and to feel confident about what we find, is science. So that’s precisely what we’re doing. ^_^

If linguistics wasn’t a science, I wouldn’t be doing it. 

This is not to say that things that are unscientific can’t be immensely valuable and meaningful – novels are maybe my favourite thing ever – but linguistics, particularly, benefits from rigorous, creative scientific inquiry just as much as ecosystems or subatomic particles do. The thing about linguistics that makes me love it, rather than find it mildly interesting, is that we can approach it scientifically, and we do. It’s all about looking at this really frankly amazing thing pretty much every human being can do, and trying to understand why. There is room within linguistics for philosophy and anthropology, and these approaches enrich the field as a whole and help us get a better grasp of what it is we need to be studying; but the part of linguistics that gets me the most excited, that makes me pour my enthusiasm and energy into things like a Master’s thesis or The Ling Space, is that exploring the human language faculty is like exploring the deep ocean. We’re barely starting to figure out how it works, and what’s in there, but we have a few solid ideas and they’re letting us look deeper and further.

Does every linguist approach the field from a scientific standpoint? No, and it’s not necessarily crucial that they do, so long as their research is well-grounded and rigorous in its own way. But the scientific approach to linguistics is attested in a considerable body of widely varied research, with transparent methods and reproducible results, and I find that this is the data that’s the most exciting to learn about, since this is the data that tells us the most about this thing we all can do. 

People may have different opinions, and I know I can enjoy reading anecdotal language stuff too – it’s fun and it makes you think and can lead to great ideas. But the science of linguistics is what made me want to be a linguist, and it’s what keeps me involved even years after being out of school. There’s so much to be discovered, and so much to be communicated. I’m thrilled I get to be a part of that scientific journey. 
