How to change default block size in Hadoop Cluster?

Big Data Hadoop

Hi Guys,

I have configured a Hadoop cluster and I want to change the default block size. How can I do that?

Answers

Yes, it is possible to change the default block size in a Hadoop cluster. The block size is actually decided on the client side at write time, so each client can choose a size that suits its workload; changing the setting only affects files written afterwards, and existing files keep the block size they were written with. On the client node, go to the Hadoop configuration directory (usually $HADOOP_HOME/etc/hadoop) and add the property shown below to hdfs-site.xml. The value is given in bytes; the example sets a 128 MB block size. (dfs.block.size is the older, deprecated name for this property; dfs.blocksize is the current one.)


<property>
  <name>dfs.blocksize</name>
  <value>134217728</value>
</property>
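If you only need a different block size for particular files, you do not have to change the cluster default at all: the client can override it per upload with the generic -D option, and fsck will show the blocks that were actually written. A minimal sketch follows; the file name bigfile.csv and the HDFS path /user/data/ are placeholders, and 268435456 bytes (256 MB) is just an example value.

# Upload one file with a 256 MB block size for this write only
hdfs dfs -D dfs.blocksize=268435456 -put bigfile.csv /user/data/bigfile.csv

# Verify the block size and block layout actually used for that file
hdfs fsck /user/data/bigfile.csv -files -blocks

Note that whatever value you choose must be at least the namenode minimum (dfs.namenode.fs-limits.min-block-size, 1 MB by default) and a multiple of the checksum chunk size (dfs.bytes-per-checksum, 512 bytes by default).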
 
 


 
