

...for I just had one of the Golden Moments (TM) when bits and pieces of ideas you've had floating about for a couple of months suddenly gel together into something which very well might be coherent. I glee. And, like a good graduate school geek, I long to run to my advisor(s) and go, "This thing! Look! It is a thing! And I think it works! And it could be Interesting For the Community at Large (TM)! You likes it, preeeeeeeecioussss...."

(At which point, said advisors either go, "Buh?", "Feh.", or "Oooh...nifty." I'm hoping for door #3, though I'm sure there'll be at least a bit of door #1.)


( 7 comments — Leave a comment )
Sep. 24th, 2004 10:59 am (UTC)
Oooh! Go go gadget brain gel! *g*
Sep. 24th, 2004 11:07 am (UTC)
Someone pass the mental floss...
Sep. 24th, 2004 11:25 am (UTC)
Hee hee - yeeeeeessssss.
Sep. 24th, 2004 11:45 am (UTC)
Can you describe your idea?
Sep. 24th, 2004 08:22 pm (UTC)
Previous work: Make model of language learning, use in population to account for language change.

Previous problems: Some arbitrary parameters in model, such as length of "critical period" in which children are sensitive to data and how much each piece of data alters the current hypothesis that the child has about the language with respect to some binary-valued parameter x. These were calibrated based on the language change data available.

Idea in a nutshell (a rather large nutshell): View child's initial hypothesis as a binomial distribution centered around some probability p, with some # of sentences in critical period "n". (Can get this data by surveying all known languages and finding out if they have value 1 or value 2 for parameter x. This then becomes probability p - the probability of any given language having value 1 for parameter x, for example.) This is the "a priori" probability distribution for parameter x that the child starts out with, aka "current belief".

Use Bayesian updating method from natural language processing stuff as a way to update "current belief", given each new piece of data - which is either an instance of value 1 or value 2. Doing cute math stuff makes the amount each datum affects the "old belief" dependent on n, the # of sentences in the "critical period" for parameter x. So, I can get rid of some of the arbitrary parameters from my previous model, use this method in my current model to say something about the critical period n for parameter x, and possibly apply this method to say something about the critical periods for parameters x1, x2, x3, ...., about which we already know things from child language acquisition experiments.

Yay, multi-disciplinary ideas.
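(One plausible way to sketch the updating scheme described above, for the curious: treat the "a priori" distribution as a Beta prior whose mean is the cross-linguistic probability p and whose effective sample size is n, the number of critical-period sentences. The Beta prior is the standard conjugate choice for binary data; the specific function names, the p = 0.3 survey value, and n = 100 below are illustrative assumptions, not anything from the original description.)

```python
# A minimal Beta-Binomial sketch of the "current belief" update.
# Assumptions (not from the post): the prior "centered around p" is
# Beta(p*n, (1-p)*n), so n plays the role of the effective sample
# size for the critical period; each datum is one sentence showing
# value 1 or value 2 for the binary parameter x.

def make_prior(p, n):
    """Beta prior with mean p and effective sample size n (pseudo-counts)."""
    return (p * n, (1.0 - p) * n)  # (alpha, beta)

def update(belief, datum):
    """Bayesian update on one sentence; datum is 1 or 2."""
    alpha, beta = belief
    if datum == 1:
        return (alpha + 1, beta)
    return (alpha, beta + 1)

def prob_value1(belief):
    """Current belief: probability the language has value 1 for x."""
    alpha, beta = belief
    return alpha / (alpha + beta)

# Example: survey of known languages gives p = 0.3; critical period n = 100.
belief = make_prior(0.3, 100)
for datum in [1, 1, 2, 1]:          # four sentences heard by the child
    belief = update(belief, datum)
print(round(prob_value1(belief), 4))  # -> 0.3173
```

Note the property the comment highlights: because the prior carries n pseudo-counts, a single datum shifts the belief by roughly 1/n, so a longer critical period means each individual sentence matters less.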
Sep. 24th, 2004 05:01 pm (UTC)
Sweeet! Let's hope for some of 1 (since then it's a new idea) and then lots of 3 (since it's a good idea). Share! I wanna know.

Shall we have another linguistics lesson? i'll be at faire on Sunday :-)
Sep. 24th, 2004 08:23 pm (UTC)
I'm afraid not this Sunday, but I'll be happy to blather at you about it any time I'm free. ;)


Owl Side
Jalen Strix
