My new Accumulo 2 install can't find hdfs


My new Accumulo 2 install can't find hdfs

Jeffrey Zeiberg
I know that NameNode and DataNode are running, and I know I have a working
file system, based on this:

[jzeiberg@sr-linux2 conf]$ hadoop fs -ls hdfs://localhost:9000/
Found 4 items
-rw-rw-rw-   1 hdfs     supergroup          0 2018-12-05 10:57
hdfs://localhost:9000/MY_HADOOP_CLUSTER
drwx------   - jzeiberg supergroup          0 2018-12-12 13:54
hdfs://localhost:9000/accumulo
drwxr-xr-x   - jzeiberg supergroup          0 2018-12-05 14:14
hdfs://localhost:9000/failures
drwxr-xr-x   - jzeiberg supergroup          0 2018-12-05 14:14
hdfs://localhost:9000/import


But when I run either of these commands:
accumulo-service master start
    or
accumulo-cluster restart


I get this error for the master service (the others start fine):

2018-12-12 13:28:33,449 [conf.SiteConfiguration] INFO : Found Accumulo
configuration on classpath at
/home/jzeiberg/accumulo/accumulo-2.0.0-SNAPSHOT/conf/accumulo.properties
2018-12-12 13:28:33,643 [zookeeper.ZooUtil] ERROR: Problem reading instance
id out of hdfs at hdfs://localhost:9000/accumulo/instance_id
java.io.IOException: No FileSystem for scheme: hdfs
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2796)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2810)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:98)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2853)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2835)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:387)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
        at org.apache.accumulo.core.volume.VolumeImpl.<init>(VolumeImpl.java:45)
        at org.apache.accumulo.core.volume.VolumeConfiguration.create(VolumeConfiguration.java:161)
        at org.apache.accumulo.core.volume.VolumeConfiguration.getVolume(VolumeConfiguration.java:40)
        at org.apache.accumulo.fate.zookeeper.ZooUtil.getInstanceIDFromHdfs(ZooUtil.java:591)
        at org.apache.accumulo.fate.zookeeper.ZooUtil.getInstanceIDFromHdfs(ZooUtil.java:585)
        at org.apache.accumulo.server.util.ZooZap.main(ZooZap.java:76)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.accumulo.start.Main.lambda$execMainClass$1(Main.java:166)
        at java.lang.Thread.run(Thread.java:748)
2018-12-12 13:28:33,644 [start.Main] ERROR: Thread
'org.apache.accumulo.server.util.ZooZap' died.
java.lang.RuntimeException: Can't tell if Accumulo is initialized; can't
read instance id at hdfs://localhost:9000/accumulo/instance_id
        at org.apache.accumulo.fate.zookeeper.ZooUtil.getInstanceIDFromHdfs(ZooUtil.java:613)
        at org.apache.accumulo.fate.zookeeper.ZooUtil.getInstanceIDFromHdfs(ZooUtil.java:585)
        at org.apache.accumulo.server.util.ZooZap.main(ZooZap.java:76)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.accumulo.start.Main.lambda$execMainClass$1(Main.java:166)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: No FileSystem for scheme: hdfs
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2796)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2810)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:98)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2853)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2835)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:387)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
        at org.apache.accumulo.core.volume.VolumeImpl.<init>(VolumeImpl.java:45)
        at org.apache.accumulo.core.volume.VolumeConfiguration.create(VolumeConfiguration.java:161)
        at org.apache.accumulo.core.volume.VolumeConfiguration.getVolume(VolumeConfiguration.java:40)
        at org.apache.accumulo.fate.zookeeper.ZooUtil.getInstanceIDFromHdfs(ZooUtil.java:591)




--
Sent from: http://apache-accumulo.1065345.n5.nabble.com/Developers-f3.html
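The root failure in the trace above, "No FileSystem for scheme: hdfs", usually means the Hadoop HDFS client jars are missing from the classpath of the failing process, not that HDFS itself is broken. One quick way to compare what the two launchers can see (standard Hadoop and Accumulo CLI commands; exact jar counts depend on the install, and `accumulo classpath` assumes your build ships that subcommand):

```shell
# Count hdfs jars visible to the Hadoop CLI. Since `hadoop fs -ls` worked
# above, this should be greater than zero.
hadoop classpath | tr ':' '\n' | grep -c hdfs

# Count hdfs jars visible to Accumulo's launcher, whose classpath is
# assembled from accumulo-env.sh. Zero here would explain the error.
accumulo classpath | tr ':' '\n' | grep -c hdfs
```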
Re: My new Accumulo 2 install can't find hdfs

Mike Walch-2
Hi Jeffrey,

It's strange that you were able to get the other services started but not
the master.  Is your accumulo-env.sh file set up correctly?  What version
of Hadoop are you using?  Hadoop 3 is required for Accumulo 2.0.  It looks
like you initialized your Accumulo instance (using 'accumulo init') but you
should check that a file named after your instance ID is printed when you
run the command below:

hadoop fs -ls hdfs://localhost:9000/accumulo/instance_id/
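On the accumulo-env.sh question: in Accumulo 2 the launcher's classpath is assembled in that file, and "No FileSystem for scheme: hdfs" is the typical symptom when the Hadoop hdfs jars never make it onto it. A minimal sketch of the relevant fragment — all paths here are assumptions for a plain tarball install, so adjust them to your layout:

```shell
## Fragment of conf/accumulo-env.sh -- illustrative only; paths are assumptions.
## HADOOP_HOME must point at a Hadoop 3 install so the hdfs client jars resolve.
export HADOOP_HOME="${HADOOP_HOME:-/opt/hadoop-3.1.1}"
export HADOOP_CONF_DIR="${HADOOP_CONF_DIR:-$HADOOP_HOME/etc/hadoop}"

## The classpath must include the Hadoop common AND hdfs jars; without the
## hdfs ones, FileSystem cannot resolve the hdfs:// scheme at runtime.
CLASSPATH="$HADOOP_CONF_DIR"
CLASSPATH="$CLASSPATH:$HADOOP_HOME/share/hadoop/common/*:$HADOOP_HOME/share/hadoop/common/lib/*"
CLASSPATH="$CLASSPATH:$HADOOP_HOME/share/hadoop/hdfs/*:$HADOOP_HOME/share/hadoop/hdfs/lib/*"
export CLASSPATH
```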

It looks like you are running Accumulo locally.  You might want to try
using uno: https://github.com/apache/fluo-uno

-Mike

On Thu, Dec 13, 2018 at 9:30 AM Jeffrey Zeiberg <[hidden email]> wrote:


Re: My new Accumulo 2 install can't find hdfs

Jorge Machado
Check your configs. I would say you did not specify the correct Hadoop location (e.g. hdfs:///hadoop...) in your Accumulo site configuration.
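For reference, in Accumulo 2 the HDFS location lives in accumulo.properties (the log above shows that file being loaded from conf/) under the instance.volumes property. A hypothetical fragment matching the hdfs://localhost:9000 listing earlier in the thread:

```properties
# Fragment of conf/accumulo.properties -- illustrative; both values are
# assumptions based on the listing and log output shown in this thread.
instance.volumes=hdfs://localhost:9000/accumulo
instance.zookeeper.host=localhost:2181
```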

Jorge Machado
[hidden email]


> On 13.12.2018 at 17:13, Mike Walch <[hidden email]> wrote: