
Tensors

A tensor is the fundamental data structure used in TensorFlow. It is a multidimensional array with a uniform data type, meaning every element of a tensor has the same data type.

So, what impact does this have on the ML process? 

In the case of data frames, all the raw data, such as integers, strings and floats, can be loaded into a single data frame. So, you could load raw data into a data frame and then process it to convert it into a numerical form for ML. With tensors, because every element must share one data type, the raw data first needs to be loaded into another data structure and processed; when you are ready to learn from the data, you can load it into a tensor.

Now, in the next video, you will learn more about tensors from Avishek.

So, in this video, you learnt that tensors are n-dimensional arrays that are quite similar to NumPy arrays. An important difference between these is their performance. NumPy is a highly efficient library that is designed to work on CPUs. On the other hand, TensorFlow can work on CPUs and GPUs. So, if you have a compatible GPU, then it is highly likely that TensorFlow will outperform NumPy.
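As a quick check of this point, TensorFlow can report which devices it can use; this is a minimal sketch using the tf.config API, and the exact output depends on your machine:

```python
import tensorflow as tf

# List the devices TensorFlow can place operations on. On a machine
# with a compatible GPU, the second call returns a non-empty list,
# and TensorFlow will use the GPU for tensor operations automatically.
print(tf.config.list_physical_devices("CPU"))
print(tf.config.list_physical_devices("GPU"))
```

If the GPU list is empty, TensorFlow still works; it simply runs on the CPU, in which case its performance is comparable to NumPy's.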

In most use cases in ML, you will use either 2D or 3D tensors. A 2D tensor is equivalent to a matrix. It can be used to represent a feature matrix, with each column being a feature and each row being a data point. A 2D tensor will suffice for most ML needs, and you might want to convert a higher-dimensional tensor to a 2D tensor for learning tasks. Recall the ML algorithms covered so far in this course; the data sets for all of them were in matrix form, where each row represented a data point and each column represented a feature. This is how these algorithms are designed to work. 
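For example, a small feature matrix like the ones used in earlier modules can be declared directly as a 2D tensor; the values below are hypothetical:

```python
import tensorflow as tf

# A minimal feature matrix: 3 data points (rows) x 2 features (columns).
X = tf.constant([[5.1, 3.5],
                 [4.9, 3.0],
                 [6.2, 3.4]])

print(X.shape)   # (3, 2): 3 rows (data points), 2 columns (features)
print(X.dtype)   # float32: inferred automatically from the values
```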

You can declare two types of tensors in TensorFlow. In the next video, Avishek will explain this.

Now, let’s summarise the differences between these two types of tensors:

  1. The values of constant tensors cannot be changed once they are declared but those of variable tensors can be.
     
  2. Constant tensors need to be initialised with a value at the time of declaration, whereas variable tensors can have their values assigned or updated later using operations.
     
  3. Differentiation is calculated for variable tensors only, and the gradient operation ignores constants while differentiating. 

Note: tf.constant is similar to tf.Tensor; both of them have immutable values, but tf.Variable differs from both. Whenever you declare a tensor with tf.constant, the result is an object of the tf.Tensor type, whereas tf.Variable creates a different object altogether. You can visit the tf.constant page of the TensorFlow documentation to understand this better.
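The differences listed above can be sketched in a few lines, assuming TensorFlow 2.x; the values are illustrative:

```python
import tensorflow as tf

c = tf.constant(3.0)   # immutable: its value is fixed at declaration
v = tf.Variable(3.0)   # mutable: its value can be updated in place

v.assign(5.0)          # works for variables; constants have no assign method

# GradientTape tracks variables by default; constants are ignored
# while differentiating unless explicitly watched.
with tf.GradientTape() as tape:
    y = v * v + c
dy_dv, dy_dc = tape.gradient(y, [v, c])
print(dy_dv)   # d(v*v + c)/dv = 2v = 10.0
print(dy_dc)   # None: the constant was not watched
```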

Now, answer these questions based on what you learnt in this segment.

In the next video, you will learn how to declare tensors in TensorFlow.

In this video, Avishek demonstrated the use of tf.constant to declare tensors. Here are the key observations from the video:

  1. The version of TensorFlow used for the demonstrations in this module is 2.2.0. Ensure that you also use the same version when you practise coding. To check which version is imported, and to reinstall version 2.2.0 if a different one is present, use this segment of code:

     ```python
     import tensorflow as tf
     tf.__version__

     # In case another version is present, uninstall the current version
     # and reinstall version 2.2.0 from the command line:
     # pip uninstall tensorflow
     # pip install tensorflow==2.2.0
     ```
  2. Three different ranks of tensors were initialised using lists of integers. Note the way in which the tensor with three dimensions was initialised. There are two points to note here: first, the number of square brackets, and second, the use of multiplication. Apart from the brackets needed to declare the 2D array, there is one extra pair of brackets, which tells TensorFlow that the rank of the tensor being initialised is 3. By multiplying the array by 5, the same array was repeated five times to provide the values for the rank-3 tensor. Similarly, you can use a single element or a single row of values to create a tensor of your desired dimension.
     
  3. Whenever a tensor is printed, you will notice the following:
    1. Its values
    2. Its shape
    3. Its data type
       
  4. TensorFlow can autodetect the data type of a tensor based on its values. If the given values have different data types, one of two outcomes can occur:
    1. The different data types can be combined into one. For example, if a few of the declared numbers are integers and a few are float numbers, then TensorFlow will make all of them floats.
    2. The data types cannot be combined. For example, strings and floats cannot be combined. In such a case, TensorFlow will show an error.

In short, the entire tensor needs to have the same data type. 
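The observations above can be sketched in code; the values are illustrative, and the multiplication trick from the video is used to build the rank-3 tensor:

```python
import tensorflow as tf

# Rank-1 and rank-2 tensors from integer lists.
r1 = tf.constant([1, 2, 3])
r2 = tf.constant([[1, 2, 3], [4, 5, 6]])

# The extra pair of brackets makes this rank 3, and multiplying the
# list by 5 repeats the 2D block five times along the new axis.
r3 = tf.constant([[[1, 2, 3], [4, 5, 6]]] * 5)

# Printing a tensor shows its values, shape and data type.
print(r1.shape, r2.shape, r3.shape)  # (3,) (2, 3) (5, 2, 3)

# Mixing integers and floats: TensorFlow combines them into one
# data type by making all the values floats.
mixed = tf.constant([1, 2.5, 3])
print(mixed.dtype)  # float32

# Mixing strings and floats, e.g. tf.constant(["a", 1.0]), cannot be
# combined into one data type and raises an error instead.
```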

Now, answer these questions based on what you learnt in this segment.

So, in this segment, you learnt about the methods of declaring tensors of different ranks. Nevertheless, all the tensors declared in this segment were of the tf.constant type. In the next segment, you will learn how to declare tf.Variable tensors.
