Tensors In Neural Networks



Have you ever wondered how information is represented within a neural network?

All the inputs, outputs, and transformations are represented by data structures referred to as tensors. Now you’re probably wondering: what exactly are tensors?

The term “tensor” is a mathematical generalization of many related concepts rather than one specific thing.

If you’ve programmed before, you are most likely familiar with the terms “number”, “array/list”, and “2D array”.

If you have studied math, you probably have heard the terms “scalar”, “vector”, and “matrix”.

Well, what do all these terms have in common? A number is the same thing as a scalar, an array or list is the same thing as a vector, and a 2D array is the same thing as a matrix. And all of these terms refer to different types of tensors.

However, once we move to machine learning, we mostly set these computer science and math terms aside, because one term covers tensors of any dimension: we speak of an ND array or ND tensor, ND standing for the number of dimensions. Note that in neural network programming, tensors are essentially multidimensional arrays.

Essentially:

a number is a 0-dimensional array/tensor,

an array is a 1-dimensional array/tensor,

and a 2D array is a 2-dimensional array/tensor, as the sketch below illustrates.
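
Here is a minimal sketch of that mapping, assuming NumPy (the article doesn’t name a library; PyTorch or TensorFlow tensors expose the same idea):

import numpy as np

scalar = np.array(5)                 # a number: 0-dimensional tensor
vector = np.array([1, 2, 3])         # an array/list: 1-dimensional tensor
matrix = np.array([[1, 2], [3, 4]])  # a 2D array: 2-dimensional tensor

print(scalar.ndim, vector.ndim, matrix.ndim)  # prints: 0 1 2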

Something important to note is that the number of dimensions of an array/tensor does not tell us how many elements exist within it. Just because you have a 2D array doesn’t mean it can only hold 2 values.

x = [[2, 3]] # 2D array
y = [[2, 3, 4, 5], [3, 4, 5, 6]] # also a 2D array
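
To make this concrete, here is a small sketch, again assuming NumPy, showing that both arrays above are 2-dimensional even though they hold different numbers of elements:

import numpy as np

x = np.array([[2, 3]])                      # 2D array holding 2 elements
y = np.array([[2, 3, 4, 5], [3, 4, 5, 6]])  # also a 2D array, holding 8 elements

print(x.ndim, y.ndim)    # prints: 2 2 -> same number of dimensions
print(x.shape, y.shape)  # prints: (1, 2) (2, 4)
print(x.size, y.size)    # prints: 2 8 -> different element counts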


