You will probably hear more about 'transformers' in artificial intelligence; this article describes them. To date, successful neural networks have worked through 'feature detection' - a convolutional neural network (CNN), for example, focuses on a small area and tries to detect an edge (or a phrase, or a morpheme), gradually building a model of the whole out of parts. A transformer, by contrast, "runs processes so that every element in the input data connects, or pays attention, to every other element." This is called 'self-attention.' Transformers have performed quite well compared to more traditional neural network architectures. And they're more 'universal' - the same approach used for image processing also works for natural language processing.
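To see what "every element pays attention to every other element" means in practice, here is a minimal sketch of self-attention in NumPy. It is deliberately simplified: a real transformer applies learned query, key, and value projections, multiple heads, and positional encodings, none of which are shown here.

```python
import numpy as np

def self_attention(X):
    """Simplified single-head self-attention (no learned projections).

    Every row of X (one input element) is compared against every other
    row, so the attention matrix is n x n - all pairs interact.
    """
    d = X.shape[-1]
    # Pairwise similarity between all elements, scaled by sqrt(d)
    scores = X @ X.T / np.sqrt(d)
    # Softmax over each row turns similarities into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted mix of *all* input rows
    return weights @ X

# Three toy 'elements' (word embeddings, image patches, etc.)
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
out = self_attention(X)
```

Note that nothing in the function cares whether the rows of `X` are word embeddings or image patches, which is one way to see the 'universality' claim: the same mechanism applies to either.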