Foreign sources (Berdyshev)

On https://www.scopus.com:
1)
https://www.scopus.com/record/display.uri?eid=2-s2.0-0032482432&origin=resultslist&sort=cp-f&src=s&st1=neural+network&nlo=&nlr=&nls=&sid=679b26cac29a36193382ff45f66bb4ec&sot=b&sdt=cl&cluster=scosubtype%2c%22ar%22%2ct&sl=29&s=TITLE-ABS-KEY%28neural+network%29&relpos=0&citeCnt=24191&searchTerm=

Collective dynamics of 'small-world' networks (Article)

    Watts, D.J., Strogatz, S.H.
    Department of Theoretical and Applied Mechanics, Kimball Hall, Cornell University, Ithaca, NY 14853, United States

Prominence percentile: 96.161

SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation

We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1]. The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3], DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures.
We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet/.
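The decoder mechanism the SegNet abstract describes, reusing encoder max-pooling indices for non-learned upsampling, can be sketched as follows (a toy pure-Python illustration of the idea, not the authors' Caffe implementation; the function names and the feature-map values are invented):

```python
# Toy illustration of SegNet-style pooling with recorded indices.
def max_pool_2x2(fmap):
    """2x2 max-pooling that also records the argmax (row, col) indices."""
    h, w = len(fmap), len(fmap[0])
    pooled, indices = [], []
    for i in range(0, h, 2):
        prow, irow = [], []
        for j in range(0, w, 2):
            window = [(fmap[r][c], (r, c))
                      for r in (i, i + 1) for c in (j, j + 1)]
            val, idx = max(window)
            prow.append(val)
            irow.append(idx)
        pooled.append(prow)
        indices.append(irow)
    return pooled, indices

def max_unpool_2x2(pooled, indices, h, w):
    """Index-based unpooling: each value returns to its recorded position;
    everything else stays zero, giving the sparse maps the abstract mentions."""
    out = [[0.0] * w for _ in range(h)]
    for i, row in enumerate(pooled):
        for j, val in enumerate(row):
            r, c = indices[i][j]
            out[r][c] = val
    return out

fmap = [[1.0, 3.0, 2.0, 0.0],
        [4.0, 2.0, 1.0, 5.0],
        [0.0, 1.0, 2.0, 2.0],
        [3.0, 0.0, 1.0, 0.0]]
pooled, idx = max_pool_2x2(fmap)
sparse = max_unpool_2x2(pooled, idx, 4, 4)
```

In the real network the sparse maps are then convolved with trainable filters to become dense again; storing only indices is what makes the decoder memory-cheap compared with learned upsampling.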
2)
https://www.scopus.com/record/display.uri?eid=2-s2.0-84876231242&origin=resultslist&sort=cp-f&src=s&st1=neural+network&nlo=&nlr=&nls=&sid=e9776e07715749d9aac159285aa4bdba&sot=b&sdt=b&sl=29&s=TITLE-ABS-KEY%28neural+network%29&relpos=1&citeCnt=29206&searchTerm=

Advances in Neural Information Processing Systems, Volume 2, 2012, Pages 1097-1105. 26th Annual Conference on Neural Information Processing Systems 2012, NIPS 2012; Lake Tahoe, NV, United States; 3 December 2012 to 6 December 2012; Code 96883

ImageNet classification with deep convolutional neural networks (Conference Paper)
    Krizhevsky, A., Sutskever, I., Hinton, G.E.
    University of Toronto, Canada

Abstract. View references (26)
We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
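The "dropout" regularization the abstract mentions can be illustrated with a minimal sketch (the now-common "inverted" formulation, which rescales surviving units at training time; this variant is an assumption for illustration, not the authors' GPU implementation):

```python
import random

def dropout(activations, p_keep=0.5, training=True, rng=random):
    """Inverted dropout: during training, zero each unit with probability
    1 - p_keep and scale survivors by 1/p_keep so expected activations
    match test time; at inference it is the identity."""
    if not training:
        return list(activations)
    return [a / p_keep if rng.random() < p_keep else 0.0
            for a in activations]

random.seed(0)
acts = [0.5, 1.2, -0.3, 2.0, 0.8, 0.1]
dropped = dropout(acts, p_keep=0.5)
```

Randomly silencing units prevents co-adaptation of the fully-connected layers, which is why the paper credits it with reducing overfitting.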
SciVal Topic Prominence
Topic: Convolution | Neural networks | Convolutional network
Prominence percentile: 99.989
3)
https://www.scopus.com/record/display.uri?eid=2-s2.0-0032482432&origin=resultslist&sort=cp-f&src=s&st1=neural+network&nlo=&nlr=&nls=&sid=e9776e07715749d9aac159285aa4bdba&sot=b&sdt=b&sl=29&s=TITLE-ABS-KEY%28neural+network%29&relpos=2&citeCnt=24191&searchTerm=

Collective dynamics of 'small-world' networks (Article)

    Watts, D.J., Strogatz, S.H.
    Department of Theoretical and Applied Mechanics, Kimball Hall, Cornell University, Ithaca, NY 14853, United States

Abstract. View references (27)
Networks of coupled dynamical systems have been used to model biological oscillators [1-4], Josephson junction arrays [5,6], excitable media [7], neural networks [8-10], spatial games [11], genetic control networks [12] and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks 'rewired' to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them 'small-world' networks, by analogy with the small-world phenomenon [13,14] (popularly known as six degrees of separation [15]). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices.
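The tuning-through-rewiring construction the abstract describes can be sketched roughly as follows (a simplified pure-Python illustration under assumed parameters, not the paper's exact experimental setup):

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Regular ring: each node linked to its k nearest neighbours per side."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for d in range(1, k + 1):
            adj[v].add((v + d) % n)
            adj[(v + d) % n].add(v)
    return adj

def rewire(adj, p, rng):
    """With probability p, redirect the far end of each edge to a random
    node, avoiding self-loops and duplicate edges (the rewiring step)."""
    n = len(adj)
    for v in range(n):
        for w in sorted(adj[v]):
            if w > v and rng.random() < p:
                choices = [u for u in range(n) if u != v and u not in adj[v]]
                if choices:
                    u = rng.choice(choices)
                    adj[v].discard(w); adj[w].discard(v)
                    adj[v].add(u); adj[u].add(v)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over reachable pairs (BFS from each node)."""
    total = pairs = 0
    for s in adj:
        dist, queue = {s: 0}, deque([s])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

rng = random.Random(1)
regular = ring_lattice(60, 3)
shortcut = rewire(ring_lattice(60, 3), 0.1, rng)
# A small amount of rewiring should shorten typical path lengths
# while the graph remains locally clustered.
```

Even a handful of random shortcuts collapses the characteristic path length toward that of a random graph, which is exactly the small-world middle ground the paper explores.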
SciVal Topic Prominence
Topic: Models | Complex networks | Preferential attachment
Prominence percentile: 96.161
4)
https://www.scopus.com/record/display.uri?eid=2-s2.0-0242490780&origin=resultslist&sort=cp-f&src=s&st1=neural+network&nlo=&nlr=&nls=&sid=679b26cac29a36193382ff45f66bb4ec&sot=b&sdt=cl&cluster=scosubtype%2c%22ar%22%2ct&sl=29&s=TITLE-ABS-KEY%28neural+network%29&relpos=5&citeCnt=11768&searchTerm=

Collective dynamics of 'small-world' networks (Article)

    Watts, D.J., Strogatz, S.H.
    Department of Theoretical and Applied Mechanics, Kimball Hall, Cornell University, Ithaca, NY 14853, United States

Abstract. View references (27)
Networks of coupled dynamical systems have been used to model biological oscillators [1-4], Josephson junction arrays [5,6], excitable media [7], neural networks [8-10], spatial games [11], genetic control networks [12] and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks 'rewired' to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them 'small-world' networks, by analogy with the small-world phenomenon [13,14] (popularly known as six degrees of separation [15]). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices.
SciVal Topic Prominence
Topic: Models | Complex networks | Preferential attachment
Prominence percentile: 96.161
5)
 +
https://www.scopus.com/record/display.uri?eid=2-s2.0-33745805403&origin=resultslist&sort=cp-f&src=s&st1=neural+network&nlo=&nlr=&nls=&sid=679b26cac29a36193382ff45f66bb4ec&sot=b&sdt=cl&cluster=scosubtype%2c%22ar%22%2ct&sl=29&s=TITLE-ABS-KEY%28neural+network%29&relpos=12&citeCnt=7295&searchTerm=
 +
A fast learning algorithm for deep belief nets(Article)
 +
 
 +
    Hinton, G.E.aEmail Author, Osindero, S.aEmail Author, Teh, Y.-W.bEmail Author View Correspondence (jump link)
    (a) Department of Computer Science, University of Toronto, Toronto, M5S 3G4, Canada
    (b) Department of Computer Science, National University of Singapore, Singapore 117543, Singapore

Abstract. View references (21)
We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind. © 2006 Massachusetts Institute of Technology.
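The greedy layer-at-a-time learning the abstract mentions rests on a contrastive update applied to each layer in turn; below is a minimal sketch of one such CD-1 step for a single restricted Boltzmann machine layer (an illustration only: bias updates and the wake-sleep fine-tuning are omitted, and all names and sizes are invented):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample(probs, rng):
    """Binary samples from independent Bernoulli probabilities."""
    return [1.0 if rng.random() < p else 0.0 for p in probs]

def hidden_probs(v, W, b):
    """P(h_j = 1 | v) for each hidden unit j of the RBM."""
    return [sigmoid(b[j] + sum(v[i] * W[i][j] for i in range(len(v))))
            for j in range(len(b))]

def visible_probs(h, W, a):
    """P(v_i = 1 | h) for each visible unit i."""
    return [sigmoid(a[i] + sum(h[j] * W[i][j] for j in range(len(h))))
            for i in range(len(a))]

def cd1_update(v0, W, a, b, lr, rng):
    """One CD-1 step: up, sample, down, up again; nudge weights toward
    <v h>_data - <v h>_reconstruction. Bias updates omitted for brevity."""
    ph0 = hidden_probs(v0, W, b)
    h0 = sample(ph0, rng)
    v1 = visible_probs(h0, W, a)          # reconstruction probabilities
    ph1 = hidden_probs(v1, W, b)
    for i in range(len(v0)):
        for j in range(len(b)):
            W[i][j] += lr * (v0[i] * ph0[j] - v1[i] * ph1[j])
    return v1

rng = random.Random(0)
W = [[0.0] * 3 for _ in range(4)]   # 4 visible x 3 hidden weights
a = [0.0] * 4                       # visible biases
b = [0.0] * 3                       # hidden biases
v0 = [1.0, 0.0, 1.0, 0.0]
v1 = cd1_update(v0, W, a, b, lr=0.1, rng=rng)
```

In the paper, a stack of such layers is trained greedily, bottom-up, and only then fine-tuned with a contrastive version of the wake-sleep algorithm.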
SciVal Topic Prominence
Topic: Neural networks | Learning systems | Deep belief
Prominence percentile: 99.641
  
On https://link.springer.com:
 
  
Meta-path-based outlier detection in heterogeneous information network
 
Mining outliers in heterogeneous networks is crucial to many applications, but challenges abound. In this paper, we focus on identifying meta-path-based outliers in a heterogeneous information network (HIN), and calculate the similarity between different types of objects. We propose a meta-path-based outlier detection method (MPOutliers) for heterogeneous information networks that deals with these problems in one go under a unified framework. MPOutliers calculates the heterogeneous reachable probability by combining different types of objects and their relationships. It discovers the semantic information among nodes in heterogeneous networks, instead of considering only the network structure. It also computes the degree of closeness between nodes of the same type, which extends to the whole heterogeneous network. Moreover, each node is assigned a reliable weighting to measure its authority degree. Substantial experiments on two real datasets (AMiner and Movies) show that the proposed method is both effective and efficient for outlier detection.
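The paper's exact scoring is not reproduced here, but the building block the abstract describes, similarity between same-typed objects along a meta-path, can be sketched with a PathSim-style measure (Sun et al., used here as a generic stand-in) on invented toy data:

```python
from collections import defaultdict

# Invented toy HIN: authors (A) write papers (P); edges are A-P pairs.
writes = [("a1", "p1"), ("a1", "p2"), ("a2", "p1"),
          ("a2", "p3"), ("a3", "p3")]

papers_of = defaultdict(set)
for author, paper in writes:
    papers_of[author].add(paper)

def apa_count(x, y):
    """Number of A-P-A meta-path instances between authors x and y."""
    if x == y:
        return len(papers_of[x])        # one x-p-x path per paper of x
    return len(papers_of[x] & papers_of[y])

def pathsim(x, y):
    """PathSim: meta-path count normalized by self-counts, in [0, 1]."""
    return 2.0 * apa_count(x, y) / (apa_count(x, x) + apa_count(y, y))
```

Under such a measure, an object whose similarity to every same-typed neighbour is unusually low is a natural outlier candidate.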
 
  
 
CONCLUSION

The relevance of research in this area is confirmed by the sheer variety of ANN applications: automated pattern recognition, adaptive control, approximation of functionals, forecasting, expert systems, associative memory, and many others. ANNs can, for example, predict stock-market indicators, recognize optical or audio signals, and drive self-learning systems capable of parking a car or synthesizing speech from text. While neural networks are already in wide use in the West, here they remain something of an exotic technology: the Russian companies that use them in practice can be counted on one hand.

Current version as of 13:06, 24 December 2019

