From WikiChip
Search results

  • '''Artificial Intelligence'''
    9 KB (1,150 words) - 00:03, 2 October 2022
  • ...ng]] algorithms, typically by operating on [[predictive models]] such as [[artificial neural network]]s (ANNs) or [[random forest]]s (RFs). ...processing unit'' (''TPU''), ''neural network processor'' (''NNP'') and ''intelligence processing unit'' (''IPU'') as well as ''vision processing unit'' (''VPU'')
    5 KB (640 words) - 16:27, 26 September 2023
  • |market=Artificial Intelligence ...so for training of neural networks, suitable for working with the common [[artificial neural network|ANNs]] such as [[convolutional neural network|CNN]], [[recur
    4 KB (603 words) - 09:59, 11 August 2018
  • * Artificial Intelligence
    1 KB (171 words) - 20:29, 19 November 2017
  • |market=Artificial Intelligence * Holler, Mark, et al. "An electrically trainable artificial neural network (ETANN) with 10240 floating gate synapses." International Jo
    4 KB (568 words) - 17:12, 11 February 2018
  • |market=Artificial Intelligence Loihi is fabricated on Intel's [[14 nm process]] and has a total of 130,000 artificial neurons and 130 million synapses. In addition to the 128 neuromorphic cores
    12 KB (1,817 words) - 01:28, 1 October 2021
  • ...|AVX512VNNI}} which is designed to improve the performance of [[Artificial Intelligence]] workloads by improving the throughput of tight inner convolutional loop o
    32 KB (4,535 words) - 05:44, 9 October 2022
  • ...nce computing designed specifically for the [[acceleration]] of artificial intelligence workloads.
    3 KB (388 words) - 02:47, 20 May 2019
  • ...te [[2017]] specifically designed to [[accelerator|accelerate]] artificial intelligence workloads. This processor, which is fabricated on their own [[14 nm process
    3 KB (495 words) - 10:28, 10 May 2019
  • ...abless]] [[American]] semiconductor company that specializes in Artificial Intelligence and development of [[neural processors]]. Nervana-based product development ...e of the earliest startups to specialize in the acceleration of Artificial Intelligence workloads. Nervana's first-generation product sampled in limited quantities a
    2 KB (194 words) - 09:01, 11 June 2021
  • |market=Artificial Intelligence
    8 KB (1,263 words) - 03:08, 9 December 2019
  • |family=Vision Intelligence == Artificial Intelligence Engine ==
    3 KB (302 words) - 22:05, 12 April 2018
  • |market=Artificial Intelligence
    2 KB (215 words) - 10:19, 19 May 2018
  • ...features include advanced camera functionalities and Qualcomm's Artificial Intelligence Engine (AIE).
    3 KB (366 words) - 05:51, 3 October 2022
  • ...years, the company has increased its presence in the area of [[artificial intelligence]].
    721 bytes (79 words) - 00:31, 5 July 2018
  • ...neural processors]]. [[Bitmain]] started exploring the field of artificial intelligence and neural processors as early as [[2015]]. By April 2017, their first prod
    3 KB (413 words) - 12:02, 25 December 2018
  • |market=Artificial Intelligence
    713 bytes (92 words) - 00:44, 16 September 2018
