Рептилач > Wall

Рептилач
14:21:35 August 16, 2015

The neuro-revolution is coming: Greg Gage’s neuroscience kits put research in the hands of the curious!

How DIY neuroscience kits put research in the hands of the curious

Greg Gage is revolutionizing neuroscience education by making real equipment accessible to classrooms and home enthusiasts.

http://blog.ted.com/the-neuro-revolution-is-coming-greg-gages-neuroscience-kits-put-research-in-the-hands-of-the-curious/
Рептилач
04:53:38 June 24, 2015

The Turing test has been passed

Рептилач
08:33:01 June 19, 2015

Zhanna Friske gave her life to destroy MDK.

Like a hero throwing himself chest-first onto a grenade, she threw her brain onto that breeding ground of cancer in the name of a great cause: so that our children may live in a world free of Panchvidze's spawn.

It is the most heroic deed in the history of humankind!

Рептилач
20:17:08 June 17, 2015

Getting Smarter.

A U of T computer scientist is helping to build a new generation of intelligent machines:

Geoffrey Hinton has a news bulletin for you: You’re not conscious.
OK, you’re conscious as opposed to being unconscious – such as when you fall asleep at night, or when you get knocked out during a boxing match or when a doctor administers a general anesthetic before surgery. But you don’t have some intangible mental quality that worms or daffodils – or toasters, for that matter – lack.
“Consciousness is a pre-scientific term,” says Hinton, as we sit in the lounge down the hall from his office in the department of computer science. (Actually, Hinton remains standing, explaining that it’s easier on his back; to show me something on his laptop, he kneels.) He draws an analogy to how we conceived of the notion of “life” a hundred years ago. Back then, scientists and philosophers imagined that living things were endowed with a “life force” – the French philosopher Henri Bergson called it élan vital – that distinguished living from non-living matter. But once we got a grip on genetics and microbiology, and especially the structure and function of DNA, the notion simply faded away. Living matter, it turns out, is just like non-living matter, except for being organized in a particularly complex manner. Eventually, says Hinton, as we come to see brains as machines (albeit extraordinarily complex ones), we’ll see consciousness in a similar way. Consciousness, perhaps, is simply what it feels like to be using a brain.
“Of course, a boxing referee will have his own definition [of consciousness] – but all of them are just a muddle,” says Hinton. “When we get down to doing science, it’s just a useless concept.”
And with that philosophical hurdle out of the way, there’s nothing to stop us from constructing truly intelligent machines, Hinton says. To be sure, with today’s technology, no machine can perform as well, at so many different kinds of cognitive tasks, as a real live person with a fully functional brain. But a machine that’s modelled on the brain – a machine that can recognize patterns, and learn from its mistakes, just like people do – can think, too. And it won’t be mere illusion. It’s not just that they’ll look or sound as though they’re being smart; Hinton believes they’ll actually be smart. The first signs of the coming sea change are already here, from advances in computer vision to speech recognition to the self-driving car – and Hinton is confident that the revolution in machine intelligence is only just beginning.

Born in Bristol, England, Hinton was still in high school when he began to wonder about the parallels between computers and brains. (A point of trivia: Hinton is a great-great-grandson of George Boole, the 19th-century English mathematician whose work on logic paved the way for today’s digital computers. To my mind, however, he has the facial features of Isaac Newton – at least, as one imagines Newton would have looked without the wig.) Hinton went on to earn a BA in experimental psychology from the University of Cambridge, and a PhD in artificial intelligence from the University of Edinburgh. After holding a number of teaching positions in the U.K. and the U.S., he joined the faculty at U of T in 1987, and is now the Raymond Reiter Distinguished Professor of Artificial Intelligence. In 2013 he also took a part-time position at Google, with the title of Distinguished Researcher, and Hinton, now 67, divides his time between Toronto and Google’s headquarters in California.
The brain is still very much on Hinton’s mind. His most peculiar and yet endearing habit is to run down the hallway, excitedly declaring that now, finally, he understands how the brain works. “He’s got this infectious, contagious enthusiasm,” says Richard Zemel, who did his PhD under Hinton, and now, as a faculty member, works in an office down the hall from his former supervisor. He says he’s lost count of how many times Hinton has run to his office and knocked on the door, declaring “I’ve solved it! I know what the brain is doing!” Of course, repetition would seem to take the steam out of such claims – but Hinton, with his own brand of dry humour, has that angle covered, too: Hinton, according to Zemel, will typically add: “I was wrong every other time, but this time I’m right!”
Not just anyone could get away with such shenanigans. It helps if you’re brilliant. Or, to put it another way, some of the time, your ideas have to be right. “There aren’t that many people in the world who could make these claims,” says Ruslan Salakhutdinov, another former student of Hinton’s, who, like Zemel, is now on faculty and has an office along that same hallway. “Geoff is very humble,” Salakhutdinov says. “He generates a lot of good ideas – but you’d never hear him saying ‘I developed this idea on my own,’ even though he did . . . He doesn’t take as much credit for his work as he deserves.”
Hinton is recognized as a world leader in a particular branch of artificial intelligence (AI) known as “deep learning.” In fact, he pretty much invented the field. Deep learning uses neural networks – computer programs that simulate virtual neurons, which can exchange signals with their neighbours by switching on or off (or “firing”). The strength of those connections, which determines how likely the virtual neurons are to fire, is variable, mimicking the varying strengths of the connections between neurons in the brain. The network can be trained by exposing it to massive data sets; the data can represent sounds, images or any other highly structured information. In response, the strength of certain connections increases, while that of others decreases. For example, two spots with lines above them could be a pair of eyes, but with nothing more to go on, that’s a very uncertain conclusion. But if there’s a dark, horizontal patch below them, which could be a mouth – then the whole thing could be a face. If there’s a nose-like patch in between, and a hair-like area above, the identification becomes almost certain. (Of course, further cues are needed to know if it’s a human face, an animal face or C-3PO.)
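The firing rule described above can be sketched in a few lines of Python. The weights, bias, and "eye spot" features below are invented purely for illustration; they are not taken from any real network:

```python
import math

def fire(inputs, weights, bias):
    """One virtual neuron: weighted sum of input signals, squashed into a
    firing strength between 0 and 1 (sigmoid activation)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# A toy "pair of eyes" detector: two features, one per dark spot.
# Strong positive connection strengths mean the neuron fires only
# when both spots are present; the bias acts as a threshold.
weights = [4.0, 4.0]
bias = -6.0

print(fire([1.0, 1.0], weights, bias))  # both spots present -> fires (~0.88)
print(fire([1.0, 0.0], weights, bias))  # one spot missing -> mostly silent (~0.12)
```

Training, in this picture, is just the process of nudging `weights` and `bias` until the neuron fires on the right patterns.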
The value of Hinton’s work is recognized far beyond the world of computers and algorithms. “Geoff Hinton is one of the most brilliant people in cognitive science,” says Daniel Dennett, who teaches at Tufts University in Massachusetts and is known for a string of popular books, including Consciousness Explained. “I think neural networks are definitely as close as we’ve come to a thinking thing.” He cautions that, as clever as neural networks are, they have yet to match the mind “of even a simple critter.” But neural networks “will almost certainly play a major role” in the future of AI and cognitive science.

Neural networks are not a new idea. In fact, the first papers on the subject go all the way back to the 1950s, when computers took up entire rooms. Hinton worked on neural networks early in his career, and almost managed to bring them into the mainstream in the 1980s, when deep learning first began to show some promise. But it didn’t quite “take.” Computers were still painfully slow, and lacked the power needed to churn through the vast swaths of data demanded by neural networks. By the 2000s, that had changed – and Hinton’s research was moving in promising new directions. He ushered in the modern era of neural network research with a 2006 paper in the journal Neural Computation, and another key paper in Science (co-authored with Salakhutdinov) a few months later. The key idea was to partition the neural network into layers, and to apply the learning algorithms to one layer at a time, approximating the brain’s own structure and function. Suddenly AI researchers had a powerful new technique at their disposal, and the field went into overdrive.
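The layer-at-a-time idea can be caricatured in code. Hinton's 2006 papers trained stacks of restricted Boltzmann machines; the sketch below swaps in a much simpler linear autoencoder per layer, so the sizes, learning rate, and data here are all illustrative assumptions rather than anything from the original work:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_layer(data, hidden_size, epochs=300, lr=0.05):
    """Train one layer as a small linear autoencoder: learn an encoder W and
    a decoder V so that data -> code -> reconstruction approximates the input."""
    n_features = data.shape[1]
    W = rng.normal(0.0, 0.1, (n_features, hidden_size))  # encoder weights
    V = rng.normal(0.0, 0.1, (hidden_size, n_features))  # decoder weights
    losses = []
    for _ in range(epochs):
        code = data @ W            # this layer's learned representation
        recon = code @ V           # attempt to rebuild the input from the code
        err = recon - data
        losses.append(float((err ** 2).mean()))
        grad_V = code.T @ err / len(data)
        grad_W = data.T @ (err @ V.T) / len(data)
        V -= lr * grad_V
        W -= lr * grad_W
    return W, V, losses

# Greedy layer-wise training: each new layer learns from the codes
# produced by the layer beneath it, one layer at a time.
X = rng.normal(size=(200, 8))      # toy "data set"
reps = X
for size in (6, 4):
    W, V, losses = train_layer(reps, size)
    reps = reps @ W                # the codes become the next layer's input
    print(f"layer {size}: loss {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The key structural point survives the simplification: no layer's training depends on the layers above it, which is what made deep networks tractable to train.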

Some of the biggest breakthroughs involved machine vision. Beginning in 2010, an annual competition known as the Large Scale Visual Recognition Challenge has pitted the world’s best image-recognition programs against each other. The challenge is two-fold. First, the software has to determine whether each image in a vast database of more than a million images contains any of 1,000 objects. Second, it has to draw a box around each object every time one turns up in the database. In 2012, Hinton and two of his students used a program they called SuperVision to win the competition, beating out five other teams of researchers. MIT’s Technology Review called it a “turning point for machine vision,” noting that such systems now rival human accuracy for the first time. (There are some intriguing differences between the way computer programs and humans identify objects. The Review article states that the best machine vision algorithms struggle with objects that are slender, such as a flower stem or a pen. On the other hand, the programs are very good at distinguishing similar-looking animals – different bird species, for example, or different breeds of dog – a task that’s challenging for many humans.) The software that Hinton and his students developed can do more than classify images – it can produce text, even whole sentences, to describe each picture. In the computer science lounge, Hinton shows me an image of three different kinds of pizza slices, on a stove-top. The software generates the caption, “Two pizzas on top of a stove-top oven.” “Which isn’t right – but it’s in the ballpark,” Hinton says. 
Other images get correctly labeled as “a group of young people playing a game of frisbee” or “a herd of elephants walking across a dry grass field.” Standard AI, he says, “had no chance of doing this.” Older programs would have stumbled at each stage of the problem: not recognizing the image; not coming up with the right words to describe the image; not putting them together in a proper sentence. “They were just hopeless.” Now, it seems, computers can finally tell us what they’re seeing.
The practical uses for sophisticated image recognition seem almost endless, but one of Hinton’s projects is deceptively simple: getting a machine to read handwritten numbers. With great effort, he and his colleagues at Google have pulled it off. The software they’ve developed lets Google read the street addresses on people’s homes (vital for connecting the data from its “map” function to its “street view” function). “The techniques we developed for that are now the best way of reading numbers ‘in the wild,’” Hinton says. “And the neural nets are now just slightly better than people at reading those numbers.”
Other potential applications for machine vision include medical diagnostics, law enforcement, computer gaming, improved robots for manufacturing and enhanced vehicle safety. Self-driving cars are one of Google’s major current projects. But even when there’s a human behind the wheel, automated systems can check to make sure no pedestrian or cyclist is in front of the car – and even apply the brakes, automatically, if necessary.
On the medical front, another potentially life-saving application is in pharmaceutical design. Drug companies experiment with countless chemicals, involving molecules of different shapes. Predicting whether a complicated molecule will bind to another molecule is maddeningly difficult for a human chemist, even when aided by computer-generated models – but it may soon be fairly easy for sophisticated neural networks, trained to recognize the right patterns. In 2012, Merck, the pharmaceutical giant, sponsored a competition to design software to find molecules that might lead to new drugs. A team of U of T graduate students, mentored by Hinton, won the top prize. Using data describing thousands of different molecular shapes, they determined which molecules were likely to be effective drug agents.
Between the machine vision prize and the Merck prize, 2012 was obviously a good year for Hinton. As it happens, it was also the year that he won a $100,000 Killam Prize from the Canada Council for the Arts, for his work on machine learning. In 2010, he’d won the Herzberg Canada Gold Medal for Science and Engineering, which comes with a $1 million research grant over a five-year period.
Pictures are made up of distinct patterns – and so too are sounds, which means that speech recognition is a prime target for neural networks. In fact, anyone using Google’s Android phone already has access to a speech recognition system developed by Hinton and his students. Launched in 2012, the feature is called Google Now – roughly comparable to the Siri personal digital assistant that runs on iPhones – and can also be found on the Google Chrome web browser on personal computers. Popular Science named it the “Innovation of the Year” for 2012. Ask Google Now a question, and it combs the Internet for an answer, which it delivers in clear English sentences.
Recognizing speech is a good start; converting speech to text is also invaluable. And then there’s text-based machine translation – the task of translating one written language into another. One way of doing that – the old way – is to scour the Internet for words and phrases that have already been translated, use those translations, and piece together the results (and so every time you input “please,” the French output will be “s’il vous plaît”). “That’s one way to do machine translation – but it doesn’t really understand what you’re saying,” says Hinton. A more sophisticated approach, he says, is to feed lots of English and French sentences into a neural network. When you give it an English sentence for translation, the network predicts the likely first word in French. If it is told the true first word in French, it can then predict the likely next word in French, and so on. “After a lot of training, the predictions become very good,” says Hinton. “And this works as well as the best translation systems now, on a medium-sized database. I think that’s an amazing breakthrough.”
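A drastically simplified stand-in for that predict-the-next-French-word loop: a count table plays the role of the neural network, and the three-sentence parallel corpus is made up for illustration:

```python
from collections import Counter, defaultdict

# A made-up three-pair parallel corpus, purely for illustration.
corpus = [
    ("i love cats", "j'aime les chats"),
    ("i love dogs", "j'aime les chiens"),
    ("you love cats", "tu aimes les chats"),
]

# Count which French word follows each (English sentence, French prefix) context.
model = defaultdict(Counter)
for en, fr in corpus:
    fr_words = fr.split()
    for i, word in enumerate(fr_words):
        model[(en, tuple(fr_words[:i]))][word] += 1

def translate(en_sentence, max_len=10):
    """Greedily emit the most likely next French word, feeding each predicted
    word back in as context: the loop described above, with a count table
    standing in for the neural network."""
    prefix = []
    for _ in range(max_len):
        context = (en_sentence, tuple(prefix))
        if context not in model:
            break
        prefix.append(model[context].most_common(1)[0][0])
    return " ".join(prefix)

print(translate("i love cats"))  # -> j'aime les chats
```

The neural version replaces the exact-match count table with a learned model that generalizes to sentences it has never seen, which is the whole point.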
An even bigger breakthrough would be to skip the textual mediator, and translate speech from one language directly into speech from another language. In fact, Microsoft unveiled a demonstration version of such a system last year; the company has added it as a “preview” feature to its popular Skype communications platform. Hinton, however, says the Microsoft translator is still fairly rudimentary. “What you really want is to put something in your ear, and you talk in French, and I hear it in English.” As soon as Hinton mentions this, I immediately think of Douglas Adams and his comedic science fiction classic, The Hitchhiker’s Guide to the Galaxy. In the Hitchhiker’s Guide, Adams describes a “small, yellow, leech-like” fish, which, when placed in the ear, functions as a universal translator: With the Babel fish in place, “you can instantly understand anything said to you in any form of language.” Hinton is clearly a Douglas Adams fan, too. Yes, he says, a mechanical version of Adams’ fictitious fish is exactly the technology that he’s talking about. He adds, somewhat optimistically: “That will make a big difference; it will improve cultural understanding.” (In the Hitchhiker’s Guide, the Babel fish has the opposite effect, causing “more and bloodier wars than anything else in the history of creation.”)
Hinton and the other experts I spoke with emphasized the benefits of machine intelligence, but there’s long been a dark side surrounding such technology. Machines may improve our lives, but they can also take lives. This spring, a week-long conference was held at the United Nations in Geneva, where delegates considered the question of autonomous drones making life-and-death decisions in combat, and carrying out attacks without direct human involvement. (As usual, the science fiction writers were the first to explore this territory, with killer machines being a sci-fi staple from The Terminator to The Matrix.) Hinton is well aware that the largest investor in machine learning is the U.S. Department of Defense. He refuses to take money from the U.S. military, but understands that there is nothing to stop them – or anyone – from implementing his ideas.
But Hinton’s tone is positive. Major societal changes are coming, thanks to machine learning, and those changes will do more good than harm. It’s been a long wait. Hinton – and, in fact, all of the researchers that I spoke with – acknowledge AI’s

Рептилач
17:45:39 June 13, 2015

Two suicide bombers in Pakistan blew each other up after a quarrel.

The incident took place in Punjab province. According to eyewitnesses, the two bombers were sitting on a bench, caught up in a discussion that escalated into a quarrel. The quarrel ended with at least one of them detonating. No one else was hurt, although the bench sustained damage.

The moral of this fable: teamwork among suicide bombers requires very serious coordination :)

http://www.dailymail.co.uk/news/article-3117238/Two-suicide-bombers-blow-getting-fight-one-putting-explosive-vests.html

Рептилач
12:22:44 June 12, 2015

Amnesia: memories are not erased, they are just hard to reawaken without optogenetics:

#рептилач #scientific #optogenetics

Mice in which the "rewriting" of memories from short-term to long-term storage is blocked quickly forget what they have learned. But it turns out that even in such animals the acquired experience is not completely erased from the brain. The neural circuits in which the memory is "recorded" are preserved in animals with amnesia, but the connections between these cells are weak. Most importantly, the system of connections that should activate the right circuit in response to specific stimuli never forms. Nevertheless, optogenetics makes it possible to awaken even these "dormant" memories.

Рептилач
11:33:56 June 12, 2015

Hit like if all your stories start with these words:

Рептилач
15:06:30 June 06, 2015

O RESPECTED ONE, WHOSE MOTHER IS HOLY! SIT DOWN AND STAY A WHILE, YOU CHOCOLATE BAR! EH? IN YOUR WISDOM YOU DECIDED TO LEAVE, YOU FRAGRANT NEAT FREAK, WHOSE MOTHER, BY THE WAY, IS AN EXCELLENT COOK! EH? COME SIT DOWN AND TRY TO GIVE YOURSELF TO ME! I WILL GIVE MYSELF TO YOU! ARISTOCRAT, THE LORD'S NOFAP CHAMPION, MAY YOU BE PRAISED FOREVER AND EVER! SIT DOWN, GENIUS! RESTORE VIRGINITY TO YOURSELF AND YOUR WHOLE FAMILY. MY SWEET, GENEROUS SPENDER! SUGAR, GREAT MAN, PIOUS ONE! SIT DOWN HERE, RESPECTED ONE, PHILANTHROPIST, PATRON! SIT DOWN AND STAY A WHILE, YOU HERO, YOU MASTERMIND!

Рептилач
17:37:35 June 04, 2015

#рептилач #original

The Solar System, Рептилач-style. In upcoming installments, look out for:

The Milky Way galaxy, the galactic cluster, the supercluster, and the multiverse, Рептилач-style.

Рептилач
18:34:35 June 02, 2015

A lingering illness that strained the voice of Allah still did not prevent the third chapter of the book from being recorded, but it did devilishly slow the material's release.

Рептилач
02:38:00 June 01, 2015

In Altai, a child was treated for encephalitis in a church for a week:


In Altai, the parents of a one-year-old boy spent a week trying to cure his encephalitis in a church, after which the child was hospitalized in critical condition

http://rufabula.com/news/2015/05/31/encephalitis-church
Рептилач
19:32:28 May 30, 2015

Hallucinations of neural networks:

#рептилач #scientific

Neural networks are slowly learning to hallucinate. For now their dreams are merely amusing and useless, but that amusement is strictly temporary...

Рептилач
20:47:23 May 29, 2015

Molecular biologists have rejuvenated the cells of a 97-year-old man:

#рептилач #scientific #news

Researchers at the University of Tsukuba have refuted the mitochondrial theory of aging, according to which the slowdown of metabolic processes in the cell is linked to the accumulation of mutations in mitochondrial DNA. The scientists proposed a new explanation of the aging mechanism and also managed to "rejuvenate" the cells of a 97-year-old man.

The study's authors examined the number of mitochondrial mutations in the cells of young and elderly people. According to the data published in the journal Scientific Reports, they found no significant differences in the number of mitochondrial DNA mutations between the two cell lines. The authors therefore proposed a new explanation for cellular aging: in their view, the mechanism is driven by epigenetic factors that do not affect the primary DNA sequence.

The scientists analyzed regions of the mitochondrial DNA and found two areas that, in their view, may be influenced by epigenetic factors such as proteins. These regions turned out to be genes responsible for producing the amino acid glycine. The authors placed the cells of a 97-year-old man in a glycine-rich medium for 10 days, which made it possible to "rejuvenate" the cells. The scientists now hope to conduct a similar study of the influence of epigenetic factors on the aging of a whole organism.

Рептилач
19:35:32 May 29, 2015

In Volgograd, a 10-year-old schoolgirl burned her face to impress her friends:

#рептилач #BYDLO #news

Рептилач
16:21:28 May 29, 2015

Physicists at CERN recently managed to reliably observe an extremely rare event: the decay of a strange B meson into two muons. Even under the most favorable conditions it occurs in only four cases out of a billion.

#рептилач #news #scientific

Reports arrive now and then from the elementary-particle front about the discovery of, or the fruitless search for, various rare processes. Just the other day came an announcement of the definitive discovery of an ultra-rare meson decay that physicists had been hunting for several decades.

Rare things turn up in other branches of science too: some find rare minerals, others rare animals; somewhere, rare atmospheric phenomena are observed. Each such singular find is valuable and, on close study, yields a great deal of information. But in the microworld the hunt for "rarities" has an entirely different meaning, because the cause of the rarity is different.

In the world of macroscopic, or more precisely mechanical, phenomena, reproducibility reigns. We are so used to it that we can hardly imagine things being otherwise. The Sun does not dart about the sky; it rises and sets with enviable regularity. A thrown stone flies along a parabola, corrected for air resistance, and lands where it should. A cyclist coasts downhill without fear that the bicycle will suddenly teleport sideways or jump a meter into the air. We know it has always been so, and we are sure it will remain so. In a more general formulation: if two experiments are performed under exactly the same conditions, they will produce the same results. A rare atmospheric phenomenon is rare because it depends on many conditions being met simultaneously. But if the conditions repeat exactly, the phenomenon will be observed again.
In the microworld everything works differently. The behavior of elementary particles is probabilistic; it depends on a play of chance that is unpredictable and incomputable in any individual experiment. You can collide two protons in a collider under absolutely identical conditions, and the outcome of those collisions will differ, with no way to predict it in any particular case. And this is not because we are bad at calculating, but because such are the laws of nature. All that is accessible to theoretical calculation is the probability of one outcome or another out of the full set of allowed possibilities. That is precisely what theorists compute, and precisely what experimenters measure when they want to test theoretical assumptions about how the world works.

Rare processes involving elementary particles are simply processes with a very small probability. Whether or not you create the most favorable conditions for them, they will remain very rare. Why these processes are so reluctant to occur is the central question of this science, and it is for its sake that physicists expend so much effort. We will return to that question, and to some answers, later; for now, let us estimate which scales of "rarity" are normal in particle physics and which are not.
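A quick back-of-the-envelope on what "four cases out of a billion" means in practice. The figures below are round illustrative numbers, not official measurements:

```python
import math

branching_ratio = 4e-9  # two-muon decays per produced strange B meson

def expected_decays(n_mesons):
    """Mean number of two-muon decays among n produced mesons."""
    return n_mesons * branching_ratio

def p_at_least_one(n_mesons):
    """Poisson probability of catching at least one such decay."""
    return 1.0 - math.exp(-expected_decays(n_mesons))

print(expected_decays(1e9))   # ~4 decays per billion mesons
print(p_at_least_one(1e8))    # 100 million mesons: only ~33% chance of a single decay
```

Which is why an experiment needs tens of billions of mesons, and an exquisitely clean detector, before a handful of genuine events can be separated from the background.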

Рептилач
11:35:48 May 29, 2015

Bioengineers have learned to produce DNA structures whose assembly and disassembly can be controlled.

#рептилач #scientific