NyARToolkit for BeagleBoard-xm (2014-05-05 02:18 by johanna2 #72908)
Hello,
I'm working with the BeagleBoard-xM and I want to run NyARToolkit. I have already enabled the USB camera. Could you please send me the .apk version of NyARToolkit for the BeagleBoard, or tell me how to start building an app that uses a USB camera?
Thank you so much for your help.
Re: NyARToolkit for BeagleBoard-xm (2014-05-05 04:18 by noritsuna #72910)
The UVCCameraPreview#run method gets a Bitmap image from the UVC camera, so if you merge that code with the CameraPreview class of NyARToolkit, you can use a USB camera on the BeagleBoard-xM.
Regards,
Noritsuna
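The merge described above can be sketched in plain Java, with the Android pieces (Bitmap, SurfaceHolder, JNI) replaced by hypothetical stand-ins so the control flow is visible: grab a frame each iteration, hand it to the preview callback, and stop cleanly via a volatile flag. The FrameSource and FrameCallback interfaces below are illustrative only, not part of NyARToolkit.

```java
// Hypothetical stand-ins for the JNI camera (processCamera/pixeltobmp)
// and for NyARToolkit's preview callback; names are illustrative only.
interface FrameSource {
    byte[] nextFrame();   // stand-in for the pixeltobmp() result
    boolean isOpen();     // stand-in for the cameraExists flag
}

interface FrameCallback {
    void onPreviewFrame(byte[] frame);
}

public class CaptureLoop implements Runnable {
    private final FrameSource source;
    private final FrameCallback callback;
    private volatile boolean shouldStop = false;
    private int frames = 0;

    public CaptureLoop(FrameSource source, FrameCallback callback) {
        this.source = source;
        this.callback = callback;
    }

    @Override
    public void run() {
        // Same shape as UVCCameraPreview#run: loop while the camera
        // exists, fetch a frame, pass it on, honour the stop flag.
        while (source.isOpen() && !shouldStop) {
            byte[] frame = source.nextFrame();
            callback.onPreviewFrame(frame);
            frames++;
        }
    }

    public void stop() { shouldStop = true; }
    public int frameCount() { return frames; }
}
```

In the real merge, nextFrame() corresponds to processCamera() plus pixeltobmp(), and onPreviewFrame() is where the frame is handed to NyARToolkit's marker detection.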
Re: NyARToolkit for BeagleBoard-xm (2014-05-13 23:27 by johanna2 #73002)
Hello,
Thank you so much for your help. I downloaded the UVC camera sample and tried to run it in Android Studio, but I got the error "program sh not found in PATH". I also opened the NyARToolkit project, and I guess I have to create a class for the webcam in jp.androidgroup.nyartoolkit.hardware, but I'm not sure if that's right.
Do I have to do anything else? Sorry, I don't have much experience creating Android apps.
Re: NyARToolkit for BeagleBoard-xm (2014-05-20 00:48 by johanna2 #73078)
Hello,
I'm still working on the BeagleBoard. I downloaded NyARToolkitAndroid-2.5.2. When I opened the project I found the UVCCamera.java class; I tried it, but the USB camera did not work, so I started modifying the class like this:
// The parameter strings to communicate with camera driver.
public static final String PARM_PREVIEW_SIZE = "preview-size";
public static final String PARM_PICTURE_SIZE = "picture-size";
public static final String PARM_JPEG_QUALITY = "jpeg-quality";
public static final String PARM_ROTATION = "rotation";
public static final String PARM_GPS_LATITUDE = "gps-latitude";
public static final String PARM_GPS_LONGITUDE = "gps-longitude";
public static final String PARM_GPS_ALTITUDE = "gps-altitude";
public static final String PARM_GPS_TIMESTAMP = "gps-timestamp";
public static final String SUPPORTED_ZOOM = "zoom-values";
public static final String SUPPORTED_PICTURE_SIZE = "picture-size-values";
private SharedPreferences mPreferences;
public static final int IDLE = 1;
public static final int SNAPSHOT_IN_PROGRESS = 2;
public static final int SNAPSHOT_COMPLETED = 3;
private static final int FOCUS_NOT_STARTED = 0;
private static final int FOCUSING = 1;
private static final int FOCUSING_SNAP_ON_FINISH = 2;
private static final int FOCUS_SUCCESS = 3;
private static final int FOCUS_FAIL = 4;
private int mFocusState = FOCUS_NOT_STARTED;
private LocationManager mLocationManager = null;
private final OneShotPreviewCallback mOneShotPreviewCallback = new OneShotPreviewCallback();
private final AutoFocusCallback mAutoFocusCallback = new AutoFocusCallback();
private String mFocusMode;
private Handler mHandler = null;
private MediaPlayer mVoiceSound = null;
//** Class modification **//
private Bitmap bmp=null;
private static final boolean DEBUG = true;
// This definition also exists in ImageProc.h.
// Webcam must support the resolution 640x480 with YUYV format.
static final int IMG_WIDTH=640;
static final int IMG_HEIGHT=480;
// The following variables are used to draw camera images.
private int winWidth=0;
private int winHeight=0;
private Rect rect;
private int dw, dh;
private float rate;
// /dev/videoX (X = cameraId + cameraBase) is used.
// On some OMAP devices the system uses /dev/video[0-3],
// so users must use /dev/video[4-].
// In that case, try cameraId=0 and cameraBase=4.
private int cameraId=0;
private int cameraBase=0;
// JNI functions
public native int prepareCamera(int videoid);
public native int prepareCameraWithBase(int videoid, int camerabase);
public native void processCamera();
public native void stopCamera();
public native void pixeltobmp(Bitmap bitmap);
static {
System.loadLibrary("ImageProc");
}
private LocationListener [] mLocationListeners = new LocationListener[] {
new LocationListener(LocationManager.GPS_PROVIDER),
new LocationListener(LocationManager.NETWORK_PROVIDER)
};
public UVCCamera(NyARToolkitAndroidActivity mMainActivity, SurfaceView mSurfaceView) {
Log.d(TAG, "instance");
}
public void setPreviewCallback(PreviewCallback callback) {
mJpegPreviewCallback = callback;
}
public void setParameters(Parameters params) {
mCameraDevice.setParameters(params);
}
public Parameters getParameters() {
return mCameraDevice.getParameters();
}
public void resetPreviewSize(int width, int height) {
}
/**
* This Handler is used to post message back onto the main thread of the
* application
*/
public void handleMessage(Message msg) {
switch (msg.what) {
case NyARToolkitAndroidActivity.RESTART_PREVIEW: {
if (mStatus == SNAPSHOT_IN_PROGRESS) {
// We are still in the processing of taking the picture, wait.
// This is strange. Why are we polling?
// TODO remove polling
Log.d(TAG, "sendEmptyMessageDelayed(RESTART_PREVIEW)");
mHandler.sendEmptyMessageDelayed(NyARToolkitAndroidActivity.RESTART_PREVIEW, 100);
}
else
restartPreview();
break;
}
case NyARToolkitAndroidActivity.SHOW_LOADING: {
stopPreview();
break;
}
case NyARToolkitAndroidActivity.HIDE_LOADING: {
startPreview();
break;
}
}
}
private int getHeight() {
// TODO Auto-generated method stub
return 0;
}
private int getWidth() {
// TODO Auto-generated method stub
return 0;
}
public void onSnap() {
// If we are already in the middle of taking a snapshot then ignore.
if (mPausing || mStatus == SNAPSHOT_IN_PROGRESS) {
return;
}
mStatus = SNAPSHOT_IN_PROGRESS;
mImageCapture.initiate();
}
@Override
public void onStart() {
Log.d(TAG, "onStart");
}
@Override
public void onDestroy() {
Log.d(TAG, "onDestroy");
}
@Override
public void onResume() {
Log.d(TAG, "onResume");
mPausing = false;
mImageCapture = new ImageCapture();
}
@Override
public void onStop() {
Log.d(TAG, "onStop");
}
@Override
public void onPause() {
Log.d(TAG, "onPause");
}
public void restartPreview() {
Log.d(TAG, "restartPreview");
// make sure the surfaceview fills the whole screen when previewing
mSurfaceView.requestLayout();
mSurfaceView.invalidate();
startPreview();
}
private void stopReceivingLocationUpdates() {
if (mLocationManager != null) {
for (int i = 0; i < mLocationListeners.length; i++) {
try {
mLocationManager.removeUpdates(mLocationListeners[i]);
} catch (Exception ex) {
Log.d(TAG, "failed to remove location listeners, ignoring", ex);
}
}
}
}
public Location getCurrentLocation() {
// go in worst to best order
for (int i = 0; i < mLocationListeners.length; i++) {
Location l = mLocationListeners[i].current();
if (l != null) return l;
}
return null;
}
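The display-area computation in the pasted class (the winWidth*3/4 branches in the capture loop) is just 4:3 letterboxing in integer math. Pulled out as a standalone helper (the helper name is ours, not NyARToolkit's), it is easy to check against a couple of window sizes:

```java
// Compute the 4:3 letterbox placement of a 640x480 camera image inside
// a winWidth x winHeight window, mirroring the dw/dh/rate/rect logic
// in the pasted UVCCamera code. Helper name is ours, for illustration.
public class Letterbox {
    static final int IMG_WIDTH = 640;
    static final int IMG_HEIGHT = 480;

    /** Returns {left, top, right, bottom} of the target rectangle. */
    public static int[] fit(int winWidth, int winHeight) {
        int dw, dh;
        if (winWidth * 3 / 4 <= winHeight) {
            // Window is taller than 4:3: pad top and bottom.
            dw = 0;
            dh = (winHeight - winWidth * 3 / 4) / 2;
            return new int[]{dw, dh, dw + winWidth - 1, dh + winWidth * 3 / 4 - 1};
        } else {
            // Window is wider than 4:3: pad left and right.
            dw = (winWidth - winHeight * 4 / 3) / 2;
            dh = 0;
            return new int[]{dw, dh, dw + winHeight * 4 / 3 - 1, dh + winHeight - 1};
        }
    }

    /** Scale factor from camera image to window, as in the `rate` field. */
    public static float scale(int winWidth, int winHeight) {
        return (winWidth * 3 / 4 <= winHeight)
                ? ((float) winWidth) / IMG_WIDTH
                : ((float) winHeight) / IMG_HEIGHT;
    }
}
```

For an 800x600 window the image fills the window exactly; for 800x480 it is centered horizontally with 80-pixel bars on each side.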
Re: NyARToolkit for BeagleBoard-xm (2014-05-24 03:24 by johanna2 #73143)
[Reply to message #73079]
> Hi,
>
> What's the error message?
> And Why do you use "private android.hardware.Camera mCameraDevice"?
>
> Regards,
> Noritsuna
>
Hi,
Thanks for your answer; let me explain the situation in more detail. I opened the NyARToolkitAndroid-2.5.2 project; there is a package, jp.androidgroup.nyartoolkit.hardware, with the following classes:

*CameraIF.java
*Dev1Camera.java
*HT03ACamera.java
*N1Camera.java
*SocketCamera.java
*StaticCamera.java
*UVCCamera.java

I noticed that the CameraIF.java class is an interface for cameras.
In another package, jp.androidgroup.nyartoolkit, the class NyARToolkitAndroidActivity.java is the main class; it chooses the camera class in its "// init Camera." section (the case where the UVCCamera class is used, for example).
When I execute the app, it says the app has unfortunately stopped; the problem is at this line of the class:

NyARToolkitAndroidActivity.java -------> Class

Line 198 ----> mCameraDevice = new UVCCamera(this, mSurfaceView);

Do you think it would be better to modify another class of NyARToolkit, for example the HT03ACamera class? Maybe I have unnecessary code in my UVCCamera class. I want to make clear that this UVCCamera class is NOT the class you sent me; it is a class included in the original package, which I am modifying. Can you explain in more detail how to merge the WebCamera code with NyARToolkit? Thanks a lot.
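One thing worth checking before modifying more code: the comments in the UVCCamera class say the native side opens /dev/videoX with X = cameraId + cameraBase, and that on some OMAP boards (like the BeagleBoard) /dev/video[0-3] are already taken, so cameraBase may need to be 4. Getting this wrong makes prepareCameraWithBase() fail, which can then surface as a crash where the camera object is constructed. The node selection itself is trivial (the helper name below is ours, for illustration):

```java
// Mirror of the /dev/videoX selection described in the UVCCamera
// comments: the node index is cameraId + cameraBase. Helper is ours.
public class VideoNode {
    public static String devicePath(int cameraId, int cameraBase) {
        return "/dev/video" + (cameraId + cameraBase);
    }
}
```

With cameraId=0 and cameraBase=4 this yields /dev/video4, the first node free on boards where the system already claims /dev/video[0-3].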
Re: NyARToolkit for BeagleBoard-xm (2014-05-24 03:28 by johanna2 #73144)
[Reply to message #73143]
> [Reply to message #73079]
> > Hi,
> >
> > What's the error message?
> > And Why do you use "private android.hardware.Camera mCameraDevice"?
> >
> > Regards,
> > Noritsuna
> >
> Hi,
>
> Thanks for your answer; let me explain the situation in more detail. I opened the NyARToolkitAndroid-2.5.2 project; there is a package with the following classes:
>
> jp.androidgroup.nyartoolkit.hardware -------> Package
>
> *CameraIF.java
> *Dev1Camera.java
> *HT03ACamera.java
> *N1Camera.java
> *SocketCamera.java
> *StaticCamera.java
> *UVCCamera.java
>
> I noticed that CameraIF.java class is an interface for cameras.
>
> In other package --------> jp.androidgroup.nyartoolkit, the class NyARToolkitAndroidActivity.java is the main class that choose the camera class in this line:
>
> // init Camera.
>
> ****** In case we are using UVCCamera class for example********
>
> else if(getString(R.string.camera_name).equals("jp.androidgroup.nyartoolkit.hardware.UVCCamera")) {
>
> isUseSerface = true;
> if (mTranslucentBackground) {
> mGLSurfaceView = new GLSurfaceView(this);
> mGLSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
> mGLSurfaceView.setRenderer(mRenderer);
> mGLSurfaceView.getHolder().setFormat(PixelFormat.TRANSLUCENT);
> SurfaceView mSurfaceView = new SurfaceView(this);
> mCameraDevice = new UVCCamera(this, mSurfaceView);
> mPreferences = PreferenceManager.getDefaultSharedPreferences(this);
> setContentView(mGLSurfaceView);
> addContentView(mSurfaceView, new LayoutParams(LayoutParams.FILL_PARENT, LayoutParams.FILL_PARENT));
> } else {
> setContentView(R.layout.uvccamera);
> SurfaceView mSurfaceView = (SurfaceView) findViewById(R.id.UVC_camera_preview);
> mCameraDevice = new UVCCamera(this, mSurfaceView);
> mPreferences = PreferenceManager.getDefaultSharedPreferences(this);
> mGLSurfaceView = (GLSurfaceView) findViewById(R.id.UVC_GL_view);
> mGLSurfaceView.setRenderer(mRenderer);
>
> ***********************************************************************
>
> So, I decided to modify this class UVCCamera.java that was included originally in this package of the NyARToolkit, in this class they are using:
>
> private android.hardware.Camera mCameraDevice
>
> I added the following code to this class UVCCamera based on the example UVCCamerapreview you sent me:
>
> *************Here I added run method*****************
>
> But I'm not sure about mCameraDevice; I don't know how to merge this run method in this case, but I did it like this:
>
> public class ImageCapture {
>
> private boolean mCancel = false;
>
> /*
> * Initiate the capture of an image.
> */
> public void initiate() {
> if (mCameraDevice == null) {
> return;
> }
>
> mCancel = true;
>
> capture();
> }
>
> private void capture() {
>
> while (cameraExists) {
> //obtaining display area to draw a large image
> if(winWidth==0){
> winWidth=this.getWidth();
> winHeight=this.getHeight();
>
> if(winWidth*3/4<=winHeight){
> dw = 0;
> dh = (winHeight-winWidth*3/4)/2;
> rate = ((float)winWidth)/IMG_WIDTH;
> rect = new Rect(dw,dh,dw+winWidth-1,dh+winWidth*3/4-1);
> }else{
> dw = (winWidth-winHeight*4/3)/2;
> dh = 0;
> rate = ((float)winHeight)/IMG_HEIGHT;
> rect = new Rect(dw,dh,dw+winHeight*4/3 -1,dh+winHeight-1);
> }
> }
>
> // obtaining a camera image (pixel data are stored in an array in JNI).
> processCamera();
> // camera image to bmp
> pixeltobmp(bmp);
>
> Canvas canvas = getHolder().lockCanvas();
> if (canvas != null)
> {
> // draw camera bmp on canvas
> canvas.drawBitmap(bmp,null,rect,null);
>
> getHolder().unlockCanvasAndPost(canvas);
> }
>
> if(shouldStop){
> shouldStop = false;
> break;
> }
> }
> mCameraDevice.setParameters(mParameters);
> // mCameraDevice.takePicture(null, null, new JpegPictureCallback(loc));
> mPreviewing = false;
> }
>
> ***************************Then I modified******************************************************
>
>
>
> public void surfaceCreated(SurfaceHolder holder) {
> Log.d(TAG, "surfaceCreated");
> if(DEBUG) Log.d(TAG, "surfaceCreated");
> if(bmp==null){
> bmp = Bitmap.createBitmap(mViewFinderWidth, mViewFinderHeight, Bitmap.Config.ARGB_8888);
> }
> // /dev/videox (x=cameraId + cameraBase) is used
> int ret = prepareCameraWithBase(cameraId, cameraBase);
>
> if(ret!=-1) cameraExists = true;
>
> mainLoop = new Thread((Runnable) this);
> mainLoop.start();
>
> }
>
> *********************************And finally I modified this**********************************************************
>
> public void surfaceDestroyed(SurfaceHolder holder) {
> Log.d(TAG, "surfaceDestroyed");
>
> if(DEBUG) Log.d(TAG, "surfaceDestroyed");
> if(cameraExists){
> shouldStop = true;
> while(shouldStop){
> try{
> Thread.sleep(100); // wait for thread stopping
> }catch(Exception e){}
> }
> }
> stopCamera();
> mSurfaceHolder = null;
> }
>
> *************************************************************************************
> When I execute the app, it says the app has unfortunately stopped; the problem is at this line of the class:
>
>
> NyARToolkitAndroidActivity.java-------> Class
>
> Line 198---->mCameraDevice = new UVCCamera(this, mSurfaceView);
>
> In this class mCameraDevice is defined as:
>
> private CameraIF mCameraDevice;
>
>
> *****************************Conclusion**************************************************
>
> Do you think it would be better to modify another class of NyARToolkit, for example the HT03ACamera class? Maybe I have unnecessary code in my UVCCamera class. I want to make clear that the UVCCamera class is NOT the class you sent me; it is a class included in the original package, and I am modifying it. Can you explain in more detail how to merge the WebCamera code with NyARToolkit in this case? Thanks a lot.
>
// /dev/videoX (X = cameraId + cameraBase) is used.
// On some OMAP devices the system uses /dev/video[0-3],
// so users must use /dev/video[4-].
// In that case, try cameraId=0 and cameraBase=4.
private int cameraId=0;
private int cameraBase=0;
// This definition also exists in ImageProc.h.
// Webcam must support the resolution 640x480 with YUYV format.
static final int IMG_WIDTH=640;
static final int IMG_HEIGHT=480;
// The following variables are used to draw camera images.
private int winWidth=0;
private int winHeight=0;
private Rect rect;
private int dw, dh;
private float rate;
// JNI functions
public native int prepareCamera(int videoid);
public native int prepareCameraWithBase(int videoid, int camerabase);
public native void processCamera();
public native void stopCamera();
public native void pixeltobmp(Bitmap bitmap);
static {
System.loadLibrary("ImageProc");
}
@Override
public void run() {
while (cameraExists) {
//obtaining display area to draw a large image
if(winWidth==0){
winWidth=this.getWidth();
winHeight=this.getHeight();
if(winWidth*3/4<=winHeight){
dw = 0;
dh = (winHeight-winWidth*3/4)/2;
rate = ((float)winWidth)/IMG_WIDTH;
rect = new Rect(dw,dh,dw+winWidth-1,dh+winWidth*3/4-1);
}else{
dw = (winWidth-winHeight*4/3)/2;
dh = 0;
rate = ((float)winHeight)/IMG_HEIGHT;
rect = new Rect(dw,dh,dw+winHeight*4/3 -1,dh+winHeight-1);
}
}
// obtaining a camera image (pixel data are stored in an array in JNI).
processCamera();
// camera image to bmp
pixeltobmp(bmp);
Canvas canvas = getHolder().lockCanvas();
if (canvas != null)
{
// draw camera bmp on canvas
canvas.drawBitmap(bmp,null,rect,null);
getHolder().unlockCanvasAndPost(canvas);
}
if(shouldStop){
shouldStop = false;
break;
}
}
}
@Override
public void surfaceCreated(SurfaceHolder holder) {
if(DEBUG) Log.d(TAG, "surfaceCreated");
if(bmp==null){
bmp = Bitmap.createBitmap(IMG_WIDTH, IMG_HEIGHT, Bitmap.Config.ARGB_8888);
}
// /dev/videox (x=cameraId + cameraBase) is used
int ret = prepareCameraWithBase(cameraId, cameraBase);
if(ret!=-1) cameraExists = true;
mainLoop = new Thread(this);
mainLoop.start();
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
if(DEBUG) Log.d(TAG, "surfaceChanged");
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
if(DEBUG) Log.d(TAG, "surfaceDestroyed");
if(cameraExists){
shouldStop = true;
while(shouldStop){
try{
Thread.sleep(100); // wait for thread stopping
}catch(Exception e){}
}
}
stopCamera();
}
@Override
public void setPreviewCallback(PreviewCallback cb) {
// TODO Auto-generated method stub
}
@Override
public void setParameters(Parameters params) {
// TODO Auto-generated method stub
}
@Override
public Parameters getParameters() {
// TODO Auto-generated method stub
return null;
}
@Override
public void resetPreviewSize(int width, int height) {
// TODO Auto-generated method stub
}
@Override
public void onStart() {
// TODO Auto-generated method stub
}
@Override
public void onResume() {
// TODO Auto-generated method stub
}
@Override
public void onStop() {
// TODO Auto-generated method stub
}
@Override
public void onPause() {
// TODO Auto-generated method stub
}
@Override
public void onDestroy() {
// TODO Auto-generated method stub
}
@Override
public void handleMessage(Message msg) {
// TODO Auto-generated method stub
}
}
***************************************************************************************
Here I just copied the UVCCamera class you sent me, extended SurfaceView, and then implemented CameraIF, SurfaceHolder.Callback, and Runnable.
I still don't know how to implement the Cameraweb.java class methods or how to merge them with the CameraIF methods, which are:
******************************CameraIF.java class ******************************
/**
* It is an interface for cameras.
*
* @author noritsuna
*
*/
public interface CameraIF {
public void setPreviewCallback(PreviewCallback cb);
public void setParameters(Parameters params);
public Parameters getParameters();
public void resetPreviewSize(int width, int height);
public void onStart();
public void onResume();
public void onStop();
public void onPause();
public void onDestroy();
}
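On the merge question above: the CameraIF lifecycle methods map naturally onto the UVC sample's capture thread. Start the thread in onResume(), raise the stop flag in onPause(), and wait for the thread to finish; Thread.join() is a cleaner wait than the sample's sleep-polling loop in surfaceDestroyed(). A plain-Java sketch of that pattern (Android types omitted; the class and field names are ours, for illustration):

```java
// Lifecycle-driven worker thread, the same shape a CameraIF
// implementation could use: onResume starts the capture loop, onPause
// asks it to stop and joins. Names are ours, for illustration.
public class CameraLifecycle {
    private volatile boolean shouldStop = false;
    private Thread mainLoop;
    private int framesProcessed = 0;

    public void onResume() {
        shouldStop = false;
        mainLoop = new Thread(() -> {
            while (!shouldStop) {
                framesProcessed++;  // stand-in for processCamera()/pixeltobmp()
            }
        });
        mainLoop.start();
    }

    public void onPause() {
        shouldStop = true;
        if (mainLoop != null) {
            try {
                mainLoop.join();    // cleaner than polling with Thread.sleep(100)
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public boolean isRunning() {
        return mainLoop != null && mainLoop.isAlive();
    }
}
```

In a real CameraIF implementation the loop body would call the JNI functions and draw to the SurfaceHolder's canvas, and stopCamera() would be called after join() returns.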