add flutter to android

Following the steps at https://flutter.dev/docs/development/add-to-app, add a Flutter module to an existing application.

Preparation:

Android Studio must be upgraded to 3.6, and the Flutter plugin must be version 42 or later.

  1. File → New → New Module, choose Flutter Module; this generates a Flutter module.

  2. Add and launch a Flutter page in the app (a minimal sketch follows the error notes below).

    1. First initialize Flutter in the Application class:
      FlutterMain.startInitialization(this);

    2. Launch the default page:

      startActivity(FlutterActivity.createDefaultIntent(activity!!))

    3. Add the dependency implementation 'android.arch.lifecycle:common-java8:1.1.1'

  3. Error log

    1. Error: package android.support.annotation does not exist

      Fix: replace the android.support imports with their AndroidX equivalents:

      import androidx.annotation.NonNull;
      import androidx.lifecycle.Lifecycle;
      import androidx.lifecycle.LifecycleObserver;
      import androidx.lifecycle.OnLifecycleEvent;

    2. Error: Flutter not initialized

      Unable to start activity ComponentInfo{com.magefitness.app/io.flutter.embedding.android.FlutterActivity}: java.lang.IllegalStateException: ensureInitializationComplete must be called after startInitialization

      Fix: Flutter was not initialized; call FlutterMain.startInitialization(this) in the Application class.

    3. Error: the dependency implementation 'android.arch.lifecycle:common-java8:1.1.1' was missing

      Failed resolution of: Lio/flutter/embedding/engine/FlutterEngineAndroidLifecycle$1;
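
For reference, here is a minimal sketch of the initialization and launch described in step 2, assuming a custom Application subclass (MyApplication is an illustrative name; register your own class in the manifest, and the launching Activity is likewise just an example):

// MyApplication.java - hypothetical Application subclass for illustration
public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // Initialize the Flutter runtime once, before any FlutterActivity is started
        FlutterMain.startInitialization(this);
    }
}

// Somewhere in an existing Activity: launch the default Flutter page
startActivity(FlutterActivity.createDefaultIntent(this));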

Hello World

Welcome to Hexo! This is your very first post. Check documentation for more info. If you get any problems when using Hexo, you can find the answer in troubleshooting or you can ask me on GitHub.

Quick Start

Create a new post

$ hexo new "My New Post"

More info: Writing

Run server

$ hexo server

More info: Server

Generate static files

$ hexo generate

More info: Generating

Deploy to remote sites

$ hexo deploy

More info: Deployment

TensorFlow Android walkthrough

Link: tensorflow/tensorflow
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android

Install TensorFlow with pip

https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.0.0-py3-none-any.whl
pip3 install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.6.0-py3-none-any.whl

Downloading https://storage.googleapis.com/tensorflow/mac/tensorflow-0.6.0-py3-none-any.whl (10.2MB)
100% |████████████████████████████████| 10.3MB 1.9MB/s
Collecting numpy>=1.8.2 (from tensorflow==0.6.0)
Downloading https://files.pythonhosted.org/packages/8e/75/7a8b7e3c073562563473f2a61bd53e75d0a1f5e2047e576ee61d44113c22/numpy-1.14.3-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl (4.7MB)
100% |████████████████████████████████| 4.7MB 832kB/s
Collecting protobuf==3.0.0a3 (from tensorflow==0.6.0)
Downloading https://files.pythonhosted.org/packages/d7/92/34c5810fa05e98082d141048110db97d2f98d318fa96f8202bf146ab79de/protobuf-3.0.0a3.tar.gz (88kB)
100% |████████████████████████████████| 92kB 18.6MB/s
Requirement not upgraded as not directly required: wheel>=0.26 in ./venv/lib/python3.6/site-packages (from tensorflow==0.6.0) (0.31.1)
Collecting six>=1.10.0 (from tensorflow==0.6.0)
Downloading https://files.pythonhosted.org/packages/67/4b/141a581104b1f6397bfa78ac9d43d8ad29a7ca43ea90a2d863fe3056e86a/six-1.11.0-py2.py3-none-any.whl
Requirement not upgraded as not directly required: setuptools in ./venv/lib/python3.6/site-packages (from protobuf==3.0.0a3->tensorflow==0.6.0) (39.1.0)
Building wheels for collected packages: protobuf
Running setup.py bdist_wheel for protobuf ... done
Stored in directory: /Users/zowee-laisc/Library/Caches/pip/wheels/07/0a/98/ca8fbec7368a85849700304bf0cf40d2d8e183f9a5dd136795
Successfully built protobuf
Installing collected packages: numpy, protobuf, six, tensorflow
Successfully installed numpy-1.14.3 protobuf-3.0.0a3 six-1.11.0 tensorflow-0.6.0
(venv) zowee-laiscdeMacBook-Pro:tensorflow zowee-laisc$

That's it. Then check that the environment works:

(venv) zowee-laiscdeMacBook-Pro:tensorflow zowee-laisc$ python
Python 3.6.4 (v3.6.4:d48ecebad5, Dec 18 2017, 21:07:28)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> hello = tf.constant('hello tensorflow')
>>> sess = tf.Session()
I tensorflow/core/common_runtime/local_device.cc:40] Local device intra op parallelism threads: 4
I tensorflow/core/common_runtime/direct_session.cc:58] Direct session inter op parallelism threads: 4
>>> print(sess.run(hello))
b'hello tensorflow'
>>> a = tf.constant(10)
>>> b = tf.constant(32)
>>> print(sess.run(a+b))
42
>>> exit();

Python 3 had some problems at runtime, so the environment was later switched to Python 2.7.

In PyCharm, set the interpreter to Python 2.7 and activate the virtual environment.

Android environment

Download the source from GitHub

git clone --recurse-submodules https://github.com/tensorflow/tensorflow.git

Use the android directory under examples as the project root and open it directly in Android Studio.

Change the build system to cmake and build.

Android demo walkthrough

Image classification (classifier)

ClassifierActivity performs the classification and extends the base class CameraActivity.

public class ClassifierActivity extends CameraActivity implements OnImageAvailableListener {

CameraActivity wraps the camera handling.

First, look at the CameraActivity code.

In onCreate:

@Override
protected void onCreate(final Bundle savedInstanceState) {
LOGGER.d("onCreate " + this);
super.onCreate(null);
//设置屏幕常亮,只要给该window对用户可见的时候,保持屏幕亮屏
getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);

setContentView(R.layout.activity_camera);
if (hasPermission()) {
setFragment();
} else {
requestPermission();
}
}

setFragment() does the following:

protected void setFragment() {
String cameraId = chooseCamera();//选择合适的camera
if (cameraId == null) {
Toast.makeText(this, "No Camera Detected", Toast.LENGTH_SHORT).show();
finish();
}

Fragment fragment;
if (useCamera2API) {//true 初始化fragment
CameraConnectionFragment camera2Fragment =
CameraConnectionFragment.newInstance(
new CameraConnectionFragment.ConnectionCallback() {
@Override
public void onPreviewSizeChosen(final Size size, final int rotation) {
Log.i("linlian","useCamera2API onPreviewSizeChosen=");
previewHeight = size.getHeight();
previewWidth = size.getWidth();
CameraActivity.this.onPreviewSizeChosen(size, rotation);
}
},
this,
getLayoutId(),
getDesiredPreviewFrameSize());

camera2Fragment.setCamera(cameraId);
fragment = camera2Fragment;
} else {
fragment =
new LegacyCameraConnectionFragment(this, getLayoutId(), getDesiredPreviewFrameSize());
}

//将合适的fragment添加到Activity中
getFragmentManager()
.beginTransaction()
.replace(R.id.container, fragment)
.commit();
}

The layout for CameraConnectionFragment is given by protected int getLayoutId() { return R.layout.camera_connection_fragment; }

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent">

<org.tensorflow.demo.AutoFitTextureView
android:id="@+id/texture"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true" />

<org.tensorflow.demo.RecognitionScoreView
android:id="@+id/results"
android:layout_width="match_parent"
android:layout_height="112dp"
android:layout_alignParentTop="true" /> <!-- shows the recognition results at the top -->

<org.tensorflow.demo.OverlayView
android:id="@+id/debug_overlay"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_alignParentBottom="true" />

</RelativeLayout>

RecognitionScoreView is a custom view that displays the recognition results: canvas.drawText(recog.getTitle() + ": " + recog.getConfidence(), x, y, fgPaint);

OverlayView is a simple view that exposes a draw callback, e.g. for showing debug images during development.

AutoFitTextureView extends TextureView.

TextureView can display a content stream such as video or an OpenGL scene.

SurfaceView works by creating a new window placed behind the application window. This is efficient because refreshing that window does not require redrawing the application window, but since the SurfaceView is not in the application window, transformations such as view.setAlpha() cannot be used, and it is hard to place inside a ListView or ScrollView.

TextureView was introduced in Android 4.0 to solve these problems; it requires hardware acceleration to be enabled.

AutoFitTextureView adds aspect-ratio fitting on top of TextureView.

The main way to use TextureView is to set a TextureView.SurfaceTextureListener:

/**
* {@link android.view.TextureView.SurfaceTextureListener} handles several lifecycle events on a
* {@link TextureView}.
*/
private final TextureView.SurfaceTextureListener surfaceTextureListener =
new TextureView.SurfaceTextureListener() {
@Override
public void onSurfaceTextureAvailable(//初始化
final SurfaceTexture texture, final int width, final int height) {
openCamera(width, height);
}

@Override
public void onSurfaceTextureSizeChanged(//size 变化时候
final SurfaceTexture texture, final int width, final int height) {
configureTransform(width, height);
}

@Override
public boolean onSurfaceTextureDestroyed(final SurfaceTexture texture) {
return true;
}

@Override
public void onSurfaceTextureUpdated(final SurfaceTexture texture) {}
};

Open the camera and configure the most suitable preview size:

/**
* Opens the camera specified by {@link CameraConnectionFragment#cameraId}.
*/
private void openCamera(final int width, final int height) {
setUpCameraOutputs();//设置预览大小
configureTransform(width, height);//旋转偏移量
final Activity activity = getActivity();
final CameraManager manager = (CameraManager) activity.getSystemService(Context.CAMERA_SERVICE);
try {
if (!cameraOpenCloseLock.tryAcquire(2500, TimeUnit.MILLISECONDS)) {
throw new RuntimeException("Time out waiting to lock camera opening.");
}
manager.openCamera(cameraId, stateCallback, backgroundHandler);//打开camera
} catch (final CameraAccessException e) {
LOGGER.e(e, "Exception!");
} catch (final InterruptedException e) {
throw new RuntimeException("Interrupted while trying to lock camera opening.", e);
}
}

stateCallback

/**
* {@link android.hardware.camera2.CameraDevice.StateCallback}
* is called when {@link CameraDevice} changes its state.
*/
private final CameraDevice.StateCallback stateCallback =
new CameraDevice.StateCallback() {
@Override
public void onOpened(final CameraDevice cd) {
// This method is called when the camera is opened. We start camera preview here.
cameraOpenCloseLock.release();
cameraDevice = cd;
createCameraPreviewSession();
}

@Override
public void onDisconnected(final CameraDevice cd) {
cameraOpenCloseLock.release();
cd.close();
cameraDevice = null;
}

@Override
public void onError(final CameraDevice cd, final int error) {
cameraOpenCloseLock.release();
cd.close();
cameraDevice = null;
final Activity activity = getActivity();
if (null != activity) {
activity.finish();
}
}
};

backgroundThread

/**
* Starts a background thread and its {@link Handler}.
*/
private void startBackgroundThread() {//在onresume的时候被调用
backgroundThread = new HandlerThread("ImageListener");
backgroundThread.start();
backgroundHandler = new Handler(backgroundThread.getLooper());
}

createCameraPreviewSession is called from the camera's onOpened() callback:

/**
* Creates a new {@link CameraCaptureSession} for camera preview.
*/
private void createCameraPreviewSession() {
try {
final SurfaceTexture texture = textureView.getSurfaceTexture();
assert texture != null;

// We configure the size of default buffer to be the size of camera preview we want.
texture.setDefaultBufferSize(previewSize.getWidth(), previewSize.getHeight());

// This is the output Surface we need to start preview.
final Surface surface = new Surface(texture);

// We set up a CaptureRequest.Builder with the output Surface.
previewRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
previewRequestBuilder.addTarget(surface);

LOGGER.i("Opening camera preview: " + previewSize.getWidth() + "x" + previewSize.getHeight());

// Create the reader for the preview frames.
previewReader =
ImageReader.newInstance(
previewSize.getWidth(), previewSize.getHeight(), ImageFormat.YUV_420_888, 2);

previewReader.setOnImageAvailableListener(imageListener, backgroundHandler);
previewRequestBuilder.addTarget(previewReader.getSurface());

// Here, we create a CameraCaptureSession for camera preview.
cameraDevice.createCaptureSession(
Arrays.asList(surface, previewReader.getSurface()),
new CameraCaptureSession.StateCallback() {

@Override
public void onConfigured(final CameraCaptureSession cameraCaptureSession) {
// The camera is already closed
if (null == cameraDevice) {
return;
}

// When the session is ready, we start displaying the preview.
captureSession = cameraCaptureSession;
try {
// Auto focus should be continuous for camera preview.
previewRequestBuilder.set(
CaptureRequest.CONTROL_AF_MODE,
CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
// Flash is automatically enabled when necessary.
previewRequestBuilder.set(
CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);

// Finally, we start displaying the camera preview.
previewRequest = previewRequestBuilder.build();
captureSession.setRepeatingRequest(
previewRequest, captureCallback, backgroundHandler);
} catch (final CameraAccessException e) {
LOGGER.e(e, "Exception!");
}
}

@Override
public void onConfigureFailed(final CameraCaptureSession cameraCaptureSession) {
showToast("Failed");
}
},
null);
} catch (final CameraAccessException e) {
LOGGER.e(e, "Exception!");
}
}

The above is just camera handling and preview-size setup.

The part actually related to recognition:

/**
* Callback for android.hardware.Camera API
*/
@Override
public void onPreviewFrame(final byte[] bytes, final Camera camera) {
Log.i("linlian","CameraActivity.onPreviewFrame()");
if (isProcessingFrame) {
LOGGER.w("Dropping frame!");//如果正在处理,则丢掉这一frame
return;
}

try {
// Initialize the storage bitmaps once when the resolution is known.
if (rgbBytes == null) {
Camera.Size previewSize = camera.getParameters().getPreviewSize();
previewHeight = previewSize.height;
previewWidth = previewSize.width;
rgbBytes = new int[previewWidth * previewHeight];//初始化 rgbBytes
onPreviewSizeChosen(new Size(previewSize.width, previewSize.height), 90);
}
} catch (final Exception e) {
LOGGER.e(e, "Exception!");
return;
}

isProcessingFrame = true;
lastPreviewFrame = bytes;
yuvBytes[0] = bytes;
yRowStride = previewWidth;

imageConverter =
new Runnable() {
@Override
public void run() {//最终调用native方法实现转化
ImageUtils.convertYUV420SPToARGB8888(bytes, previewWidth, previewHeight, rgbBytes);
}
};

postInferenceCallback =
new Runnable() {
@Override
public void run() {
camera.addCallbackBuffer(bytes);
isProcessingFrame = false;
}
};
processImage();//在子类实现图片处理
}

ClassifierActivity

@Override
protected void processImage() {
rgbFrameBitmap.setPixels(getRgbBytes(), 0, previewWidth, 0, 0, previewWidth, previewHeight);
final Canvas canvas = new Canvas(croppedBitmap);

canvas.drawBitmap(rgbFrameBitmap, frameToCropTransform, null);

// For examining the actual TF input.
if (SAVE_PREVIEW_BITMAP) {
ImageUtils.saveBitmap(croppedBitmap);
}

//将原始图片进行剪切处理成需要的尺寸croppedBitmap

runInBackground(
new Runnable() {
@Override
public void run() {
final long startTime = SystemClock.uptimeMillis();
//进行识别
final List<Classifier.Recognition> results = classifier.recognizeImage(croppedBitmap);

lastProcessingTimeMs = SystemClock.uptimeMillis() - startTime;
LOGGER.i("Detect: %s", results);//[[838] pot (42.3%), [322] pineapple (12.4%)]
cropCopyBitmap = Bitmap.createBitmap(croppedBitmap);
if (resultsView == null) {
resultsView = (ResultsView) findViewById(R.id.results);
}
resultsView.setResults(results);
requestRender();
readyForNextImage();
}
});
}

final List<Classifier.Recognition> results = classifier.recognizeImage(croppedBitmap);

Let's look at TensorFlowImageClassifier in detail:


classifier =
TensorFlowImageClassifier.create(
getAssets(),
MODEL_FILE,
LABEL_FILE,
INPUT_SIZE,
IMAGE_MEAN,
IMAGE_STD,
INPUT_NAME,
OUTPUT_NAME);

Several of these parameters depend on the model file being used:

private static final int INPUT_SIZE = 224;
private static final int IMAGE_MEAN = 117;
private static final float IMAGE_STD = 1;
private static final String INPUT_NAME = "input";
private static final String OUTPUT_NAME = "output";

The main work is in TensorFlowImageClassifier.

There are two files: a model file ending in .pb and a label file ending in .txt. TensorFlowImageClassifier reads the model and the labels, then uses the model to recognize new images.

private static final String MODEL_FILE = "file:///android_asset/tensorflow_inception_graph.pb";
private static final String LABEL_FILE =
"file:///android_asset/imagenet_comp_graph_label_strings.txt";

Read the label file and add each line to the label list private Vector<String> labels = new Vector<String>();

String actualFilename = labelFilename.split("file:///android_asset/")[1];
Log.i(TAG, "Reading labels from: " + actualFilename);
BufferedReader br = null;
try {
br = new BufferedReader(new InputStreamReader(assetManager.open(actualFilename)));
String line;
while ((line = br.readLine()) != null) {
c.labels.add(line);
}
br.close();
} catch (IOException e) {
throw new RuntimeException("Problem reading label file!" , e);
}

Load the model. Looking at the TensorFlowInferenceInterface internals, it roughly loads the file into a byte[] graphDef (the graph definition of the trained model) and obtains the Graph object via this.loadGraph(graphDef, this.g);

c.inferenceInterface = new TensorFlowInferenceInterface(assetManager, modelFilename);

With these two steps done, an image only needs to be preprocessed appropriately and fed as input to obtain the recognition results from the TensorFlow model.

The definitions of the input and output data require a good understanding of the model. The shape of the output is [N, NUM_CLASSES], where N is the batch size.

c.outputNames = new String[] {outputName};//输出
c.intValues = new int[inputSize * inputSize];//输入是一组大小为inputSize * inputSize的int数据,需要将图片信息转化为这种数据
c.floatValues = new float[inputSize * inputSize * 3];
c.outputs = new float[numClasses];

Recognition processing:

@Override
public List<Recognition> recognizeImage(final Bitmap bitmap) {
Log.i("linlian","recognizeImage");
// Log this method so that it can be analyzed with systrace.
Trace.beginSection("recognizeImage");

Trace.beginSection("preprocessBitmap");
// Preprocess the image data from 0-255 int to normalized float based
// on the provided parameters.
bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
Log.i("linlian","recognizeImage intValues.length="+intValues.length);
for (int i = 0; i < intValues.length; ++i) {
final int val = intValues[i];
floatValues[i * 3 + 0] = (((val >> 16) & 0xFF) - imageMean) / imageStd;
floatValues[i * 3 + 1] = (((val >> 8) & 0xFF) - imageMean) / imageStd;
floatValues[i * 3 + 2] = ((val & 0xFF) - imageMean) / imageStd;
//Log.i("linlian"," i="+i+" "+floatValues[i * 3 + 0]+" "+floatValues[i * 3 + 0]+" "+floatValues[i * 3 + 0]);
}
Trace.endSection();

// Copy the input data into TensorFlow. 输入
Trace.beginSection("feed");
inferenceInterface.feed(inputName, floatValues, 1, inputSize, inputSize, 3);
Trace.endSection();

// Run the inference call.运行
Trace.beginSection("run");
inferenceInterface.run(outputNames, logStats);
Trace.endSection();

// Copy the output Tensor back into the output array.
Trace.beginSection("fetch"); // fetch the output
inferenceInterface.fetch(outputName, outputs);
Trace.endSection();

// Find the best classifications.
PriorityQueue<Recognition> pq =
new PriorityQueue<Recognition>(
3,
new Comparator<Recognition>() {
@Override
public int compare(Recognition lhs, Recognition rhs) {
// Intentionally reversed to put high confidence at the head of the queue.
return Float.compare(rhs.getConfidence(), lhs.getConfidence());
}
});
for (int i = 0; i < outputs.length; ++i) {
if (outputs[i] > THRESHOLD) {
pq.add(
new Recognition(
"" + i, labels.size() > i ? labels.get(i) : "unknown", outputs[i], null));
}
}
final ArrayList<Recognition> recognitions = new ArrayList<Recognition>();
int recognitionsSize = Math.min(pq.size(), MAX_RESULTS);
for (int i = 0; i < recognitionsSize; ++i) {
recognitions.add(pq.poll());
}
Trace.endSection(); // "recognizeImage"
return recognitions;
}

Detection (detector)

About detection

Three kinds of models can be loaded. Option one: MultiBox, a model trained with the older API:

private static final String MB_INPUT_NAME = "ResizeBilinear";
private static final String MB_OUTPUT_LOCATIONS_NAME = "output_locations/Reshape";
private static final String MB_OUTPUT_SCORES_NAME = "output_scores/Reshape";
private static final String MB_MODEL_FILE = "file:///android_asset/multibox_model.pb";
private static final String MB_LOCATION_FILE =
"file:///android_asset/multibox_location_priors.txt";

Another option: a TensorFlow Object Detection API model:

private static final int TF_OD_API_INPUT_SIZE = 300;
private static final String TF_OD_API_MODEL_FILE =
"file:///android_asset/ssd_mobilenet_v1_android_export.pb";
private static final String TF_OD_API_LABELS_FILE = "file:///android_asset/coco_labels_list.txt";

And there is also YOLO:


// Configuration values for tiny-yolo-voc. Note that the graph is not included with TensorFlow and
// must be manually placed in the assets/ directory by the user.
// Graphs and models downloaded from http://pjreddie.com/darknet/yolo/ may be converted e.g. via
// DarkFlow (https://github.com/thtrieu/darkflow). Sample command:
// ./flow --model cfg/tiny-yolo-voc.cfg --load bin/tiny-yolo-voc.weights --savepb --verbalise
private static final String YOLO_MODEL_FILE = "file:///android_asset/graph-tiny-yolo-voc.pb";
private static final int YOLO_INPUT_SIZE = 416;
private static final String YOLO_INPUT_NAME = "input";
private static final String YOLO_OUTPUT_NAMES = "output";
private static final int YOLO_BLOCK_SIZE = 32;

YOLO is a real-time object detection system.

You only look once (YOLO) is a state-of-the-art, real-time object detection system. On a Pascal Titan X it processes images at 30 FPS and has a mAP of 57.9% on COCO test-dev.

Here we mainly look at the TF_OD (TensorFlow Object Detection) variant.

coco_labels_list.txt is the label file (person, bicycle, and so on).

ssd_mobilenet_v1_android_export.pb is the model file.

tracker = new MultiBoxTracker(this);
detector = TensorFlowObjectDetectionAPIModel.create(
getAssets(), TF_OD_API_MODEL_FILE, TF_OD_API_LABELS_FILE, TF_OD_API_INPUT_SIZE);
cropSize = TF_OD_API_INPUT_SIZE;

Model loading and the input/output definitions live in TensorFlowObjectDetectionAPIModel:

public static Classifier create(
final AssetManager assetManager,
final String modelFilename,
final String labelFilename,
final int inputSize) throws IOException {
final TensorFlowObjectDetectionAPIModel d = new TensorFlowObjectDetectionAPIModel();

InputStream labelsInput = null;
String actualFilename = labelFilename.split("file:///android_asset/")[1];
labelsInput = assetManager.open(actualFilename);
BufferedReader br = null;
br = new BufferedReader(new InputStreamReader(labelsInput));
String line;
while ((line = br.readLine()) != null) {
LOGGER.w(line);
d.labels.add(line);//逐行读取标签文件
}
br.close();


d.inferenceInterface = new TensorFlowInferenceInterface(assetManager, modelFilename);

final Graph g = d.inferenceInterface.graph();

d.inputName = "image_tensor";//输入的shap定义
// The inputName node has a shape of [N, H, W, C], where
// N is the batch size
// H = W are the height and width
// C is the number of channels (3 for our purposes - RGB)
final Operation inputOp = g.operation(d.inputName);
if (inputOp == null) {
throw new RuntimeException("Failed to find input Node '" + d.inputName + "'");
}
d.inputSize = inputSize;
// The outputScoresName node has a shape of [N, NumLocations], where N
// is the batch size. 三个输出
final Operation outputOp1 = g.operation("detection_scores");
if (outputOp1 == null) {
throw new RuntimeException("Failed to find output Node 'detection_scores'");
}
final Operation outputOp2 = g.operation("detection_boxes");
if (outputOp2 == null) {
throw new RuntimeException("Failed to find output Node 'detection_boxes'");
}
final Operation outputOp3 = g.operation("detection_classes");
if (outputOp3 == null) {
throw new RuntimeException("Failed to find output Node 'detection_classes'");
}

// Pre-allocate buffers.
d.outputNames = new String[] {"detection_boxes", "detection_scores",
"detection_classes", "num_detections"};
d.intValues = new int[d.inputSize * d.inputSize];
d.byteValues = new byte[d.inputSize * d.inputSize * 3];
d.outputScores = new float[MAX_RESULTS];
d.outputLocations = new float[MAX_RESULTS * 4];
d.outputClasses = new float[MAX_RESULTS];
d.outputNumDetections = new float[1];
return d;
}

Recognition

@Override
public List<Recognition> recognizeImage(final Bitmap bitmap) {
// Log this method so that it can be analyzed with systrace.
Trace.beginSection("recognizeImage");

Trace.beginSection("preprocessBitmap");
// Preprocess the image data from 0-255 int to normalized float based
// on the provided parameters.
bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());

for (int i = 0; i < intValues.length; ++i) {//图片数据处理
byteValues[i * 3 + 2] = (byte) (intValues[i] & 0xFF);
byteValues[i * 3 + 1] = (byte) ((intValues[i] >> 8) & 0xFF);
byteValues[i * 3 + 0] = (byte) ((intValues[i] >> 16) & 0xFF);
}
Trace.endSection(); // preprocessBitmap

// Copy the input data into TensorFlow.输入
Trace.beginSection("feed");
inferenceInterface.feed(inputName, byteValues, 1, inputSize, inputSize, 3);
Trace.endSection();

// Run the inference call.运行
Trace.beginSection("run");
inferenceInterface.run(outputNames, logStats);
Trace.endSection();

// Copy the output Tensor back into the output array.结果输出
Trace.beginSection("fetch");
outputLocations = new float[MAX_RESULTS * 4];
outputScores = new float[MAX_RESULTS];
outputClasses = new float[MAX_RESULTS];
outputNumDetections = new float[1];
inferenceInterface.fetch(outputNames[0], outputLocations);
inferenceInterface.fetch(outputNames[1], outputScores);
inferenceInterface.fetch(outputNames[2], outputClasses);
inferenceInterface.fetch(outputNames[3], outputNumDetections);
Trace.endSection();

// Find the best detections.
final PriorityQueue<Recognition> pq =
new PriorityQueue<Recognition>(
1,
new Comparator<Recognition>() {
@Override
public int compare(final Recognition lhs, final Recognition rhs) {
// Intentionally reversed to put high confidence at the head of the queue.
return Float.compare(rhs.getConfidence(), lhs.getConfidence());
}
});

// Scale them back to the input size.
for (int i = 0; i < outputScores.length; ++i) {
final RectF detection =
new RectF(
outputLocations[4 * i + 1] * inputSize,
outputLocations[4 * i] * inputSize,
outputLocations[4 * i + 3] * inputSize,
outputLocations[4 * i + 2] * inputSize);
pq.add(
new Recognition("" + i, labels.get((int) outputClasses[i]), outputScores[i], detection));
}

final ArrayList<Recognition> recognitions = new ArrayList<Recognition>();
for (int i = 0; i < Math.min(pq.size(), MAX_RESULTS); ++i) {
recognitions.add(pq.poll());
}
Trace.endSection(); // "recognizeImage"
return recognitions;
}

Training a model

Running the handwritten-digit example

Just run fully_connected_feed.py directly to start training:

python fully_connected_feed.py

Traceback (most recent call last):

File "fully_connected_feed.py", line 279, in <module>

tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)

TypeError: run() got an unexpected keyword argument 'main'

Is the TensorFlow version too old?

pip3 install tensorflow==1.4.0

(venv) zowee-laiscdeMacBook-Pro:tensorflow zowee-laisc$ pip3 install --upgrade tensorflow==1.6.0

The TensorFlow Python API here depends on Python 2.7.

Use PyCharm to create a new Python project with a 2.7 environment.

Activate the virtual environment:

 source venv/bin/activate

Install TensorFlow:

pip install --upgrade tensorflow==1.6.0

Go to the downloaded TensorFlow source:

tensorflow/tensorflow/tensorflow/examples/tutorials/mnist

Run:

(venv) zowee-laiscdeMacBook-Pro:mnist zowee-laisc$ python fully_connected_feed.py 
Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Extracting /tmp/tensorflow/mnist/input_data/train-images-idx3-ubyte.gz
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Extracting /tmp/tensorflow/mnist/input_data/train-labels-idx1-ubyte.gz
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting /tmp/tensorflow/mnist/input_data/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting /tmp/tensorflow/mnist/input_data/t10k-labels-idx1-ubyte.gz
2018-05-22 16:15:26.976082: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Step 0: loss = 2.32 (0.215 sec)
Step 100: loss = 2.13 (0.001 sec)
Step 200: loss = 1.96 (0.001 sec)
Step 300: loss = 1.70 (0.001 sec)
Step 400: loss = 1.31 (0.001 sec)
Step 500: loss = 1.11 (0.001 sec)
Step 600: loss = 0.89 (0.001 sec)
Step 700: loss = 0.87 (0.001 sec)
Step 800: loss = 0.70 (0.001 sec)
Step 900: loss = 0.64 (0.001 sec)
Training Data Eval:
Num examples: 55000 Num correct: 46676 Precision @ 1: 0.8487
Validation Data Eval:
Num examples: 5000 Num correct: 4264 Precision @ 1: 0.8528
Test Data Eval:
Num examples: 10000 Num correct: 8526 Precision @ 1: 0.8526
Step 1000: loss = 0.55 (0.012 sec)
Step 1100: loss = 0.58 (0.123 sec)
Step 1200: loss = 0.40 (0.001 sec)
Step 1300: loss = 0.49 (0.001 sec)
Step 1400: loss = 0.37 (0.001 sec)
Step 1500: loss = 0.70 (0.001 sec)
Step 1600: loss = 0.40 (0.001 sec)
Step 1700: loss = 0.24 (0.001 sec)
Step 1800: loss = 0.31 (0.001 sec)
Step 1900: loss = 0.39 (0.001 sec)
Training Data Eval:
Num examples: 55000 Num correct: 49053 Precision @ 1: 0.8919
Validation Data Eval:
Num examples: 5000 Num correct: 4509 Precision @ 1: 0.9018
Test Data Eval:
Num examples: 10000 Num correct: 8971 Precision @ 1: 0.8971
(venv) zowee-laiscdeMacBook-Pro:mnist zowee-laisc$

Start TensorBoard:

(venv) zowee-laiscdeMacBook-Pro:mnist zowee-laisc$ tensorboard --logdir=logs/
TensorBoard 1.6.0 at http://zowee-laiscdeMacBook-Pro.local:6006 (Press CTRL+C to quit)

Importing the OpenCV sample code into Android Studio

Import several of the OpenCV sample projects into Android Studio:

  • calibration
  • colorblob
  • facedection

facedection uses JNI, so the environment needs the NDK configured and a CMakeLists.txt written.

Simple import

  1. Download the OpenCV SDK

    Download the OpenCV Android SDK and unzip it somewhere on your machine.

  2. Import the OpenCV SDK as a module

    Following the usual OpenCV setup guide, import the code under sdk/java via File → New → Import Module.

  3. Add the project dependency

    In File → Project Structure, select the app module and, under the Dependencies tab, add a dependency on the imported OpenCV module openCVLibrary341.

  4. Add jniLibs

    Without jniLibs the project still compiles, but at runtime it first asks you to install OpenCV Manager, i.e. the matching APK from the apk folder of the SDK, for example OpenCV_3.4.1_manager_arm64-v8a.apk.

    If you do not want a separate install, copy the .so files from sdk/native/libs into the project's jniLibs folder.

    The resulting APK is then fairly large, several tens of MB.

    For simple uses of the OpenCV SDK, such as the color blob sample, this configuration is enough. The following covers NDK programming, where the project compiles its own C++ files, as in the facedetection sample.

Involving JNI

  1. Download the NDK

    File → Project Structure → SDK Location. Check the Android NDK location; if it is empty, click Download at the bottom to fetch it automatically and set the NDK path, e.g. …../sdk/android-sdk-macosx/ndk-bundle

    CMake is also needed: open the Preferences dialog in Android Studio, search for Android SDK, and under the SDK Tools tab tick CMake to download it.

  2. C/C++ source folder

    Create a cpp folder under main to hold the .cpp files.

  3. Write CMakeLists.txt

    The CMake file can be based on the CMakeLists.txt auto-generated when creating a new C++-support project in Android Studio. For reference:


    cmake_minimum_required(VERSION 3.4.1)

#include the jni include path from the OpenCV SDK, otherwise opencv2/core.hpp cannot be found
include_directories(/Users/zowee-laisc/lynn/opencv/OpenCV-android-sdk/sdk/native/jni/include)

#import the prebuilt .so library from the project's jniLibs folder
    add_library( # Sets the name of the library.
    lib_opencv

    # Sets the library as a shared library.
    SHARED

    IMPORTED)

    set_target_properties(lib_opencv
    PROPERTIES IMPORTED_LOCATION
    ${CMAKE_SOURCE_DIR}/src/main/jniLibs/${ANDROID_ABI}/libopencv_java3.so)

    #编译添加自己的jni 库
    add_library( # Sets the name of the library.
    detection_based_tracker

    # Sets the library as a shared library.
    SHARED

    # Provides a relative path to your source file(s).
    src/main/cpp/DetectionBasedTracker_jni.cpp )

    # Searches for a specified prebuilt library and stores the path as a
    # variable. Because CMake includes system libraries in the search path by
    # default, you only need to specify the name of the public NDK library
    # you want to add. CMake verifies that the library exists before
    # completing its build.

    find_library( # Sets the name of the path variable.
    log-lib

    # Specifies the name of the NDK library that
    # you want CMake to locate.
    log )

    # Specifies libraries CMake should link to your target library. You
    # can link multiple libraries, such as libraries you define in this
    # build script, prebuilt third-party libraries, or system libraries.

    target_link_libraries( # Specifies the target library.
    detection_based_tracker

    lib_opencv

    # Links the target library to the log library
    # included in the NDK.
    ${log-lib} )
  4. Modify the Gradle file

    Modify the app module's Gradle file; the key part is:

    externalNativeBuild {
    cmake {
    cppFlags "-frtti -fexceptions"
    abiFilters 'x86', 'x86_64', 'armeabi-v7a', 'arm64-v8a'
    arguments "-DANDROID_STL=gnustl_static"
    }
    }

The complete file:

apply plugin: 'com.android.application'

android {
compileSdkVersion 27



defaultConfig {
applicationId "com.zowee.facedection"
minSdkVersion 21
targetSdkVersion 27
versionCode 1
versionName "1.0"

testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"

externalNativeBuild {
cmake {
cppFlags "-frtti -fexceptions"
abiFilters 'x86', 'x86_64', 'armeabi-v7a', 'arm64-v8a'
arguments "-DANDROID_STL=gnustl_static"
}
}
}

sourceSets {
main {
jniLibs.srcDirs = ['src/main/jniLibs']
}
}

buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}

externalNativeBuild {
cmake {
path "CMakeLists.txt"
}
}

}

dependencies {
implementation fileTree(include: ['*.jar'], dir: 'libs')
implementation 'com.android.support:appcompat-v7:27.1.1'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'com.android.support.test:runner:1.0.2'
androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2'
implementation project(':openCVLibrary341')
}
  5. Troubleshooting

    Main build errors encountered and their fixes:

    • Including a header with <> fails; use double quotes instead

      Error:(1, 10) error: 'DetectionBasedTracker_jni.h' file not found with <angled> include; use "quotes" instead
  • opencv2/core.hpp cannot be found

Error:(2, 10) fatal error: 'opencv2/core.hpp' file not found

Add to CMakeLists.txt:

    include_directories(/Users/zowee-laisc/lynn/opencv/OpenCV-android-sdk/sdk/native/jni/include)
  • Linker command error

    /Measurebox2/facedection/src/main/cpp/DetectionBasedTracker_jni.cpp
    Error:(36) undefined reference to `cv::CascadeClassifier::detectMultiSc
    Error:error: linker command failed with exit code 1 (use -v to see invocation)

or
    Error:(36) undefined reference to `cv::CascadeClassi

Fix:

check the Gradle file:

    cmake {
    cppFlags "-frtti -fexceptions"
    abiFilters 'x86', 'x86_64', 'armeabi-v7a', 'arm64-v8a'
    arguments "-DANDROID_STL=gnustl_static"
    }
  • Delete some of the libs folders


    Error:Execution failed for task ':facedection:transformNativeLibsWithStripDebugSymbolForDebug'.
    > A problem occurred starting process 'command '/Users/zowee-laisc/lynn/sdk/android-sdk-macosx/ndk-bundle/toolchains/mips64el-linux-android-4.9/prebuilt/darwin-x86_64/bin/mips64el-linux-android-strip''

Fix: delete the mips64 .so library folder under jniLibs.

Getting started with Scrapy

Create a project

# After creating the project, cd into the project directory in a terminal
# source venv/bin/activate to activate the environment; the prompt gains a (venv) prefix
# (venv) zowee-laiscdeMacBook-Pro:scrapytest zowee-laisc$ pip install Scrapy   # then install Scrapy

After activating the virtual environment, use the following to exit or delete it:

Exit the virtual environment: $ deactivate
Delete the virtual environment:
rm -r venv

Create the Scrapy project: scrapy startproject tutorial

(venv) zowee-laiscdeMacBook-Pro:scrapytest zowee-laisc$ scrapy startproject tutorial
New Scrapy project 'tutorial', using template directory '/Users/zowee-laisc/lynn/pycharmProject/scrapytest/venv/lib/python3.6/site-packages/scrapy/templates/project', created in:
/Users/zowee-laisc/lynn/pycharmProject/scrapytest/tutorial

You can start your first spider with:
cd tutorial
scrapy genspider example example.com

Example

  1. Generate a spider class

    Generate a spider class with scrapy genspider spiderName xxxxdomain

    This creates a spiderXXX.py file under the spiders folder.

    Example: crawl the bug list on the company intranet.

  2. Define the item fields

    Define which data we want to handle by adding the following to items.py:

    import scrapy
    class ChandaoItem(scrapy.Item):
    # define the fields for your item here like:
    #bug title
    title = scrapy.Field()
    #严重等级
    severity = scrapy.Field()
    #bug 发现者
    founder = scrapy.Field()
    #当前责任人
    current = scrapy.Field()

  3. Logging in

    Most sites require an account/password login flow.

    In the initial request, wrap up the user credentials and hand the request back to the engine, asking for after_login to handle the response once the request completes.

    In after_login, request the page we actually want to crawl; at that point we are logged in, and the response is handed to parse_buglist:

    def parse(self, response):
    return scrapy.FormRequest.from_response(
    response,
    formdata={'account':'linlian','password':'XXXX'},
    callback = self.after_login
    )
    def after_login(self,response):
    print("after......")
    print(response.xpath('//script').extract()[0])
    #with open("body.txt", 'wb') as f:
    # f.write(response.body)
    #b"<script>parent.location='/zentao/index.html';\n\n</script>\n"
    return scrapy.Request('http://192.168.2.27/zentao/bug-browse-21.html',
    callback= self.parse_buglist)

  4. Page parsing

    Once we have the page data, use XPath to get the nodes we need, iterate over them to extract the data, and then find the link to the next page to request:

    def parse_buglist(self,response):
    print("parse_buglist")


    node_list = response.xpath("//tr[@class='text-center']")
    for node in node_list:
    item = ChandaoItem()
    item['severity'] = node.xpath("./td[2]/span/text()").extract()[0]
    item['title'] = node.xpath("./td[4]/a/text()").extract()[0]
    item['founder'] = node.xpath("./td[5]/text()").extract()[0]
    item['current'] = node.xpath("./td[6]/text()").extract()[0]
    yield item

    try:
    url = response.xpath("//i[@class='icon-play']/../@href").extract()[0]
    print(url)
    if len(url) !=0:
    yield scrapy.Request('http://192.168.2.27'+url,callback=self.parse_buglist)
    except IndexError:
    print("Get next Error")

  5. Saving the data

    Run the spider with scrapy crawl Buglist -o items.json to save the results in items.json.

Notes on Agile Development

Part 1: Agile Development

Interactions between people are complicated, and their effects are never easy to predict, yet they are the most important aspect of the work.

— Peopleware

Build self-organizing teams with a spirit of cooperation.

The weathervane on the church spire, even if made of iron, would soon be destroyed by the storm if it did not understand the art of yielding to the wind.

— Heinrich Heine

Values and principles that give a software team the ability to work quickly and respond to change.

The Agile Manifesto

Individuals and interactions over processes and tools

Working software over comprehensive documentation

Customer collaboration over contract negotiation

Responding to change over following a plan

Make detailed plans for the next two weeks, rough plans for the next three months, and very crude plans beyond that.

Agile principles:

  1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  2. Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
  3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference for the shorter interval.
  4. Business people and developers must work together daily throughout the project.
  5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  6. The most effective and efficient method of conveying information within a team is face-to-face conversation.
  7. Working software is the primary measure of progress.
  8. Agile processes promote sustainable development. Sponsors, developers, and users should be able to maintain a constant pace for the long term.
  9. Continuous attention to technical excellence and good design enhances agility.
  10. Simplicity, the art of maximizing the amount of work not done, is essential.
  11. The best architectures, requirements, and designs emerge from self-organizing teams.
  12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

Extreme Programming

XP, Extreme Programming, is the best known of the agile methods.

Talk with the customer repeatedly to understand the details of the requirements, but do not capture those details in documents.

Refactoring: do not tolerate duplicated code.

Refactoring is continuous; it is not done at the end of the project or at release time,

but something we do every hour or half hour.

Metaphor

Extreme Programming is a set of simple, concrete practices that combine into an agile development process.

Planning

Initial exploration: the customer keeps writing new user stories.

Developers estimate the stories; estimates are relative, not absolute.

Large stories are split, and stories that are too small are merged.

"Users can safely deposit, withdraw, and transfer money"

splits into:

  • user login
  • user logout
  • user deposits into their account
  • user withdraws from their account
  • user transfers from their account to another account

Task planning

A task is a piece of functionality that one developer can implement in 4 to 16 hours.

Halfway through the iteration, we want to see complete stories amounting to half of the story points, rather than 90% of the points done with not a single story complete.

Testing

Fire tests gold; adversity tests the will. (Lucius Seneca)

Test-driven development

The Mock Object pattern. Test methods:

White-box testing

Black-box testing

Refactoring

Code readability,

removing duplicated code,

extracting methods and classes.

A programming episode

If a new class is not yet necessary, use a simple parameter first: add(int) rather than add(Throw).

The Single Responsibility Principle: Game accepts throws and also knows how to score each frame, which seems to violate SRP. What about adding a scorer object? Leave it for now, because we have not yet decided where score calculation should live.

First make the code easier for people to understand.

Once all the test cases pass and the program looks messy, refactor it.

Start from the tests, top-down.

Part 2: Agile Design

The overall view evolves together with the software. In each iteration the team improves the system design so that it fits the current system as well as possible. It does not spend much time predicting future requirements, nor does it try to build infrastructure today to support features it thinks it will need tomorrow.

Symptoms of poor design:

  • Rigidity: the design is hard to change

    A single change forces a cascade of changes in dependent modules; the more modules that must change, the more rigid the design.

  • Fragility: the design is easy to break

    After a change in one place, many other places may start to fail.

  • Immobility: the design is hard to reuse

  • Viscosity: it is hard to do the right thing

  • Needless complexity: over-design

  • Needless repetition: copy-and-paste (mouse) abuse

  • Opacity: confused expression

Principles:

  • The Single Responsibility Principle
  • The Open-Closed Principle
  • The Liskov Substitution Principle
  • The Dependency Inversion Principle
  • The Interface Segregation Principle

How do agile developers know what to do?

  1. Follow agile practices to discover problems;
  2. apply design principles to diagnose the problems;
  3. apply appropriate design patterns to solve the problems.

The Single Responsibility Principle (SRP)

Cohesion: the functional relatedness of the elements of a module.

A class should have only one reason to change.

For example, separate Game and Scorer: previously Game both tracked the current game and computed the score. Each responsibility is an axis of change; if a class has more than one responsibility, there is more than one reason for it to change.

If a class takes on too many responsibilities, those responsibilities become coupled, and a change to one responsibility may impair the class's ability to fulfil the others. A sketch of the Game/Scorer split follows.
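
A minimal, hypothetical sketch of that separation (the Game/Scorer names follow the bowling example above; the method bodies are illustrative only):

// Hypothetical sketch: Game delegates scoring to a separate Scorer, so the
// rules for scoring and the rules for tracking throws change independently.
class Scorer {
    private final int[] throwsSoFar = new int[21];
    private int current = 0;

    void addThrow(int pins) { throwsSoFar[current++] = pins; }

    int scoreForFrame(int frame) {
        // all scoring rules live here and only here (body elided for brevity)
        return 0;
    }
}

class Game {
    private final Scorer scorer = new Scorer();

    void add(int pins) { scorer.addThrow(pins); } // Game only tracks the game flow
    int scoreForFrame(int frame) { return scorer.scoreForFrame(frame); }
}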

The Open-Closed Principle (OCP)

Open for extension,

closed for modification.

The key is abstraction!
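
As a small illustration (the Shape, Circle, Triangle, and AreaCalculator names are invented, not from the book's payroll code), new behavior is added by writing new implementations of an abstraction rather than by editing existing code:

// Illustrative only: the abstraction is closed for modification,
// but the system stays open for extension via new Shape implementations.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Triangle implements Shape {
    private final double base, height;
    Triangle(double base, double height) { this.base = base; this.height = height; }
    public double area() { return 0.5 * base * height; }
}

class AreaCalculator {
    // Adding another shape later requires no change to this class.
    double total(Shape[] shapes) {
        double sum = 0;
        for (Shape s : shapes) sum += s.area();
        return sum;
    }
}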

The Liskov Substitution Principle (LSP)

Subtypes must be substitutable for their base types.

Look at is-a from the point of view of behavior: is A really an is-a of B? For example, we usually think of a square as a special kind of rectangle.

But when computing the area: set the width to 4 and the length to 5, and a rectangle's area is 4*5; set a square's length to 4 and then its width to 5, and its area is no longer computed as 4*5.
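
A minimal sketch of this classic violation, using made-up Rectangle/Square classes (not code from the book):

// Illustrative Rectangle/Square sketch: Square is-a Rectangle structurally,
// but it breaks behavioral substitutability.
class Rectangle {
    protected int width, height;
    void setWidth(int w)  { width = w; }
    void setHeight(int h) { height = h; }
    int area() { return width * height; }
}

class Square extends Rectangle {
    @Override void setWidth(int w)  { width = w; height = w; }  // keeps the sides equal
    @Override void setHeight(int h) { width = h; height = h; }
}

class LspDemo {
    public static void main(String[] args) {
        Rectangle r = new Square();   // substitute the subtype for the base type
        r.setWidth(4);
        r.setHeight(5);
        // A caller reasoning about Rectangle expects 20, but gets 25.
        System.out.println(r.area());
    }
}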

Extract the common parts into a base type.

A derived class that throws exceptions its base class does not throw also violates LSP.

The meaning of is-a is too broad; the correct definition of a subtype is "substitutable for".

The Dependency Inversion Principle (DIP)

High-level modules should not depend on low-level modules; both should depend on abstractions.

Abstractions should not depend on details; details should depend on abstractions.

  • No variable should hold a pointer or reference to a concrete class.
  • No class should derive from a concrete class.
  • No method should override an implemented method of any of its base classes.

High-level policy must be separated from low-level implementation.
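
A small illustration of that separation (the Button/Lamp/Switchable names are used here only as an example): the high-level policy depends on an abstraction rather than on the concrete device.

// Illustrative DIP sketch: Button (high-level policy) depends on the
// Switchable abstraction, not on the concrete Lamp (low-level detail).
interface Switchable {
    void turnOn();
    void turnOff();
}

class Lamp implements Switchable {
    public void turnOn()  { System.out.println("lamp on"); }
    public void turnOff() { System.out.println("lamp off"); }
}

class Button {
    private final Switchable device;
    private boolean pressed;

    Button(Switchable device) { this.device = device; }

    void press() {
        pressed = !pressed;
        if (pressed) device.turnOn(); else device.turnOff();
    }
}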

The Interface Segregation Principle (ISP)

If a class's interfaces are not cohesive, the class has a "fat" interface.

Part 3: The Payroll Case Study

System requirements

Use-case analysis, enumeration, exceptional cases.

It introduces some common design patterns; Monostate was new to me: a singleton in behavior rather than in structure.

The first iteration begins

  • Specification: notes from conversations with the customer, the basic requirements.

  • Use-case analysis: think about the system's behavior.

    When doing use-case analysis, focus on stories and acceptance tests to find the kinds of operations the system's users will perform.

    The core model diagram of the system was designed from the use cases.

  • Implementation: UML as a communication medium, sketches.

    Test first.

  • Packaging: the package design principles.

    The Reuse-Release Equivalence Principle, the Common Reuse Principle, the Common Closure Principle, a summary of package cohesion, and the package coupling and stability principles.

The Weather Station case study

todo

The ETS case study

todo

Wave animation effect

Wave animation effect

Project: GitHub

Animation effect:

image

Implements a wave effect with an adjustable water level.

Implementation

Outer shape

The outer shape can be a circle, a triangle, etc.; the drawing area of the whole figure is determined by the canvas.

Initialize the paints:


private void initBoarderPaint() {
//默认的style是fill,填充的
mBoarderPain = new Paint();
mBoarderPain.setAntiAlias(true);
mBoarderPain.setStyle(Paint.Style.STROKE);//描边
mBoarderPain.setStrokeWidth(DEFAULT_BOARD_WIDTH);
mBoarderPain.setColor(Color.RED);
}

private void initBgPaint() {
//默认的style是fill,填充的
mBgPaint = new Paint();
mBgPaint.setAntiAlias(true);
mBgPaint.setColor(getResources().getColor(R.color.voicebg));
}

Draw them in onDraw:

@Override
protected void onDraw(Canvas canvas) {
super.onDraw(canvas);
float boaderwidth = DEFAULT_BOARD_WIDTH;

canvas.drawCircle(getWidth() / 2f, getHeight() / 2f, (getWidth() - boaderwidth) / 2f - 1f, mBoarderPain);//画边框
canvas.drawCircle(getWidth() / 2f, getHeight() / 2f, getWidth() / 2f - boaderwidth, mBgPaint);//画圆形背景

}

How do we draw the wave?

Initialize the wave paint:

private void initWavePaint() {
mWavePain = new Paint();
mWavePain.setAntiAlias(true);
mWavePain.setColor(getResources().getColor(R.color.voicefr));
updateWaveShader();
}

updateWaveShader uses a BitmapShader as the shader:

private void updateWaveShader() {
if (getWaveBitmap() != null) {
mWaveShader = new BitmapShader(getWaveBitmap(), Shader.TileMode.REPEAT, Shader.TileMode.CLAMP);//x坐标repeat模式,y方向上最后一个像素重复
mWavePain.setShader(mWaveShader);
} else {
mWavePain.setShader(null);
}
}

getWaveBitmap() produces a bitmap with the two wave curves drawn on it:

private Bitmap getWaveBitmap() {//返回一个波形的图案,两条线

int wavecount = 1;//容纳多少个完整波形


int width = getMeasuredWidth();
int height = getMeasuredHeight();
Log.i("linlian", "width=" + width + " height=" + height);
if (width > 0 && height > 0) {

Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Canvas waveLineCanvas = new Canvas(bitmap);


//坐标点数组,x从0-width,y=Asin(Wx+Q)+H
// W = 2*PI/width
// A = height
// h = height
// Q = 0
final int endX = width + 1;
final int endY = height + 1;
float W = (float) (2f * Math.PI * wavecount / width);
float A = height / 10f;//波浪幅度在整体的10分之一
float H = height / 2;//默认水位在一半的位置
float[] waveY = new float[endX];

for (int x = 0; x < endX; x++) {
waveY[x] = (float) (A * Math.sin(W * x)) + H;
}

int xShift = width / 4;
mWavePaint.setColor(getResources().getColor(R.color.wavebg));
for (int x = 0; x < endX; x++) {
waveLineCanvas.drawLine(x, waveY[(x + xShift) % endX], x, endY, mWavePaint);// .:|:. 像这样画线
}
mWavePaint.setColor(getResources().getColor(R.color.wavefr));
for (int x = 0; x < endX; x++) {

waveLineCanvas.drawLine(x, waveY[x], x, endY, mWavePaint);// .:|:. 像这样画线
}

return bitmap;
}
return null;
}

Once the shader is set up, add this to the view's onDraw method:

@Override
protected void onDraw(Canvas canvas) {
super.onDraw(canvas);
float boaderwidth = DEFAULT_BOARD_WIDTH;

canvas.drawCircle(getWidth() / 2f, getHeight() / 2f, (getWidth() - boaderwidth) / 2f - 1f, mBoarderPain);//画边框
canvas.drawCircle(getWidth() / 2f, getHeight() / 2f, getWidth() / 2f - boaderwidth, mBgPaint);//画圆形背景


canvas.drawCircle(getWidth() / 2f, getHeight() / 2f, getWidth() / 2f - boaderwidth, mWavePain);//画波浪图形
}

How is the animation achieved?

Through a property animator that changes waveXshift from 0 to 1 over two seconds, repeating indefinitely:

private void initAnimation() {
mShaderMatrix = new Matrix();
ObjectAnimator waveXshiftAnimator = ObjectAnimator.ofFloat(this, "waveXshift", 0f, 1f);
waveXshiftAnimator.setRepeatCount(ValueAnimator.INFINITE);
waveXshiftAnimator.setDuration(2000);
waveXshiftAnimator.setInterpolator(new LinearInterpolator());
mAnimatorset = new AnimatorSet();
mAnimatorset.play(waveXshiftAnimator);

}

When the property changes, redraw:

public void setWaveXshift(float mWaveXshift) {
if (this.mWaveXshift != mWaveXshift) {
this.mWaveXshift = mWaveXshift;
//变化的是重新绘制view,实现动画效果
invalidate();
}
}

In onDraw, shift the image horizontally by mWaveXshift * getWidth():

if (mWaveShader != null) {
if (mWavePaint.getShader() == null) {
mWavePaint.setShader(mWaveShader);
}
float dx = mWaveXshift * getWidth();
mShaderMatrix.setTranslate(dx, 0);//平移波浪,实现推进效果
mWaveShader.setLocalMatrix(mShaderMatrix);
} else {
mWavePaint.setShader(null);
}
canvas.drawCircle(getWidth() / 2f, getHeight() / 2f, getWidth() / 2f - boaderwidth, mWavePaint);//画波浪

A reimplementation is available at:

https://github.com/lynn8570/VoiceView/blob/master/animlib/src/main/java/anim/lynn/voice/VoiceWave.java

Common animation effects

Guided animations

Project: GitHub

Animation effect:

guided animation

This library is very convenient to use; for example, on a click you can animate a given view:

findViewById(R.id.submit).setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {

YoYo.with(Techniques.Tada)
.duration(700)
.playOn(findViewById(R.id.edit_area));

t.setText("Wrong password!");
}
});

YoYo.with(Techniques.Tada) returns a new AnimationComposer(techniques) object, which is mainly used to set the animation parameters.

playOn(target) copies the parameters into the animator, sets the target via animator.setTarget(target), and calls play; in other words, it configures the animation parameters and then starts the animation with animator.animate():


private BaseViewAnimator play() {
animator.setTarget(target);

if (pivotX == YoYo.CENTER_PIVOT) {
ViewCompat.setPivotX(target, target.getMeasuredWidth() / 2.0f);
} else {
target.setPivotX(pivotX);
}
if (pivotY == YoYo.CENTER_PIVOT) {
ViewCompat.setPivotY(target, target.getMeasuredHeight() / 2.0f);
} else {
target.setPivotY(pivotY);
}

animator.setDuration(duration)
.setRepeatTimes(repeatTimes)
.setRepeatMode(repeatMode)
.setInterpolator(interpolator)
.setStartDelay(delay);

if (callbacks.size() > 0) {
for (Animator.AnimatorListener callback : callbacks) {
animator.addAnimatorListener(callback);
}
}
animator.animate();
return animator;
}

Let's pick a few basic or otherwise interesting implementations and look at them.

The TextView demo animation sets a few default parameters: duration 1200 ms, infinite repeat, center pivot:

.duration(1200)
.repeat(YoYo.INFINITE)
.pivot(YoYo.CENTER_PIVOT, YoYo.CENTER_PIVOT)
.interpolate(new AccelerateDecelerateInterpolator())

Bouncing drop

DropOutAnimator

public class DropOutAnimator extends BaseViewAnimator {
@Override
protected void prepare(View target) {
int distance = target.getTop() + target.getHeight();
getAnimatorAgent().playTogether(
ObjectAnimator.ofFloat(target, "alpha", 0, 1),
Glider.glide(Skill.BounceEaseOut, getDuration(), ObjectAnimator.ofFloat(target, "translationY", -distance, 0))
);
}
}

prepare() is called from setTarget(); it mainly configures the AnimatorSet:

public BaseViewAnimator setTarget(View target) {
reset(target);
prepare(target);
return this;
}

In prepare, DropOutAnimator sets up two animations. One changes the alpha from 0 to 1, which is easy to understand. At the same time there is a translation on the Y axis, and this offset is not a simple linear motion but has a ball-bounce rebound. How is that achieved?

ObjectAnimator.ofFloat(target, "translationY", -distance, 0) animates translationY from -171 to 0: at -171 the view is off-screen, then it moves down until the view's top sits at 0, i.e. at the top of the screen.

Glider.glide(Skill.BounceEaseOut, getDuration(), ValueAnimator XXXX) installs that Skill's TypeEvaluator on the translationY animator via animator.setEvaluator(TypeEvaluator t). This combination produces the effect, so what exactly does Skill.BounceEaseOut do to the translationY offset?

The core of a TypeEvaluator is the evaluate method; subclasses override it and compute the intermediate value from the fraction factor and startValue/endValue:

public interface TypeEvaluator<T> {
T evaluate(float fraction, T startValue, T endValue);
}

The evaluator's inheritance chain is BounceEaseOut -> BaseEasingMethod -> TypeEvaluator, and BaseEasingMethod implements evaluate as follows:

@Override
public final Float evaluate(float fraction, Number startValue, Number endValue){
float t = mDuration * fraction;
float b = startValue.floatValue();
float c = endValue.floatValue() - startValue.floatValue();
float d = mDuration;
float result = calculate(t,b,c,d);
for(EasingListener l : mListeners){
l.on(t,result,b,c,d);
}
return result;
}

public abstract Float calculate(float t, float b, float c, float d);

The BounceEaseOut calculation breaks down as follows:

public  Float calculate(float t, float b, float c, float d){
Log.i("DropOutAnimator","time="+t+"start b="+b+"length c="+c+"duration="+d);
if ((t/=d) < (1/2.75f)) {//f=0.36
return c*(7.5625f*t*t) + b;
} else if (t < (2/2.75f)) {//f=0.72727272
return c*(7.5625f*(t-=(1.5f/2.75f))*t + .75f) + b;
} else if (t < (2.5/2.75)) {
return c*(7.5625f*(t-=(2.25f/2.75f))*t + .9375f) + b;
} else {
return c*(7.5625f*(t-=(2.625f/2.75f))*t + .984375f) + b;
}
}

In the first phase, f from 0 to 0.36, substituting the parameters gives 171*7.5625*x*x - 171, which runs from -171 up to 0, completing the drop of the y coordinate from -171 to 0.

In the second phase, f from 0.36 to 0.7272, substituting gives 171*(7.5625*(x-0.54)*(x-0.54)+0.75) - 171, which goes from 0 down to -42.75 and back to 0.

And so on; the curve of the algorithm looks like this:

(path diagram)

01-11 00:29:10.884 12553-12553/com.daimajia.androidanimations I/DropOutAnimator: result=-13.279831
01-11 00:29:10.900 12553-12553/com.daimajia.androidanimations I/DropOutAnimator: fraction=0.37059042startValue=-171.0fraction=0.0
01-11 00:29:10.902 12553-12553/com.daimajia.androidanimations I/DropOutAnimator: time=444.7085start b=-171.0length c=171.0duration=1200.0
01-11 00:29:10.902 12553-12553/com.daimajia.androidanimations I/DropOutAnimator: result=-3.2075958

That is the analysis of the bouncing drop animation.
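
To sanity-check the phase analysis, here is a small standalone sketch that re-implements the BounceEaseOut formula shown above and prints the offset at a few fractions. The values b = -171, c = 171, d = 1200 are taken from the log above; this is an illustrative calculation, not the library code itself:

// Standalone re-implementation of the BounceEaseOut formula from the snippet above.
public class BounceCheck {
    static float calculate(float t, float b, float c, float d) {
        if ((t /= d) < (1 / 2.75f)) {
            return c * (7.5625f * t * t) + b;
        } else if (t < (2 / 2.75f)) {
            return c * (7.5625f * (t -= (1.5f / 2.75f)) * t + .75f) + b;
        } else if (t < (2.5 / 2.75)) {
            return c * (7.5625f * (t -= (2.25f / 2.75f)) * t + .9375f) + b;
        } else {
            return c * (7.5625f * (t -= (2.625f / 2.75f)) * t + .984375f) + b;
        }
    }

    public static void main(String[] args) {
        float b = -171f, c = 171f, d = 1200f;
        for (float f : new float[] {0f, 0.18f, 0.36f, 0.54f, 0.72f, 1f}) {
            // f = 0 gives -171 (off screen), f ≈ 0.36 gives ~0, f ≈ 0.54 dips to about -42.75
            System.out.printf("f=%.2f  offset=%.2f%n", f, calculate(f * d, b, c, d));
        }
    }
}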

The other effects mostly come down to analyzing the easing formula:

t = f * duration, the elapsed time

b is the start value

c is the total change

d is the duration

@Override
public Float calculate(float t, float b, float c, float d) {
return c*((t=t/d-1)*t*t*t*t + 1) + b;
}

Heartbeat (pulse) effect

public class PulseAnimator extends BaseViewAnimator {
@Override
public void prepare(View target) {
getAnimatorAgent().playTogether(
ObjectAnimator.ofFloat(target, "scaleY", 1, 1.1f, 1),
ObjectAnimator.ofFloat(target, "scaleX", 1, 1.1f, 1)
);
}
}

A combination of X and Y scale changes: from 1 to 1.1 and back to 1.

Rubber-band effect

public class RubberBandAnimator extends BaseViewAnimator {
@Override
public void prepare(View target) {
getAnimatorAgent().playTogether(
ObjectAnimator.ofFloat(target, "scaleX", 1, 1.25f, 0.75f, 1.15f, 1),
ObjectAnimator.ofFloat(target, "scaleY", 1, 0.75f, 1.25f, 0.85f, 1)
);
}
}

The X scale is pulled from 1 to 1.25, springs back to 0.75, is pulled again to 1.15, and returns to 1.

While X is stretched, Y shrinks correspondingly: 1 to 0.75, to 1.25, to 0.85, and back to 1; the two are complementary.

Horizontal shake


public class ShakeAnimator extends BaseViewAnimator {
@Override
public void prepare(View target) {
getAnimatorAgent().playTogether(
ObjectAnimator.ofFloat(target, "translationX", 0, 25, -25, 25, -25, 15, -15, 6, -6, 0)
);
}
}

Swing effect

public class SwingAnimator extends BaseViewAnimator {
@Override
public void prepare(View target) {
getAnimatorAgent().playTogether(
ObjectAnimator.ofFloat(target, "rotation", 0, 10, -10, 6, -6, 3, -3, 0)
);
}
}

Stand-up effect

public class StandUpAnimator extends BaseViewAnimator {
@Override
public void prepare(View target) {
float x = (target.getWidth() - target.getPaddingLeft() - target.getPaddingRight()) / 2
+ target.getPaddingLeft();
float y = target.getHeight() - target.getPaddingBottom();
//中心点
getAnimatorAgent().playTogether(
ObjectAnimator.ofFloat(target, "pivotX", x, x, x, x, x),
ObjectAnimator.ofFloat(target, "pivotY", y, y, y, y, y),
ObjectAnimator.ofFloat(target, "rotationX", 55, -30, 15, -15, 0)
);
}
}

rotationX rotates around the X axis; 55 to -30 means rotating forward around the X axis from behind the screen toward the viewer, and after a few back-and-forth values it settles with a bouncing effect.

Hinge: hang, swing, then drop

public class HingeAnimator extends BaseViewAnimator {
@Override
public void prepare(View target) {
float x = target.getPaddingLeft();
float y = target.getPaddingTop();
getAnimatorAgent().playTogether(
Glider.glide(Skill.SineEaseInOut, 1300, ObjectAnimator.ofFloat(target, "rotation", 0, 80, 60, 80, 60, 60)),
ObjectAnimator.ofFloat(target, "translationY", 0, 0, 0, 0, 0, 700),
ObjectAnimator.ofFloat(target, "alpha", 1, 1, 1, 1, 1, 0),
ObjectAnimator.ofFloat(target, "pivotX", x, x, x, x, x, x),
ObjectAnimator.ofFloat(target, "pivotY", y, y, y, y, y, y)
);

setDuration(1300);
}
}

Anchored at the top-left corner, the rotation goes 0-80-60-80-60-60 with SineEaseInOut easing; finally, at 60 degrees, the Y coordinate drops and the view fades out.

The rest are much the same. How did they come up with these lovely combinations and formulas? Impressive!

SVG path animation

SVG path animation effect

Project: GitHub

Animation effect:

svg path animation

What is SVG?

Scalable Vector Graphics (SVG); a few definitions from the wiki:

  • SVG stands for Scalable Vector Graphics
  • SVG is used to define vector-based graphics for the web
  • SVG defines graphics in XML format
  • SVG images do not lose quality when zoomed or resized
  • SVG is a World Wide Web Consortium standard
  • SVG is an integral part of W3C standards such as DOM and XSL

What does PathView do?

The SVG resource id is supplied via XML:

<com.eftimoff.androipathview.PathView
android:id="@+id/pathView"
android:layout_width="350dp"
android:layout_height="350dp"
app:svg="@raw/monitor" //在res/raw文件夹中,
app:pathColor="@android:color/white"
app:pathWidth="2dp"/>

The SVG file is loaded in onSizeChanged:

@Override
protected void onSizeChanged(final int w, final int h, int oldw, int oldh) {
super.onSizeChanged(w, h, oldw, oldh);

if (mLoader != null) {
try {
mLoader.join();//必须在loader运行完毕后,再运行
} catch (InterruptedException e) {
Log.e(LOG_TAG, "Unexpected error", e);
}
}
if (svgResourceId != 0) {
mLoader = new Thread(new Runnable() {
@Override
public void run() {

svgUtils.load(getContext(), svgResourceId); // load the resource file into mSvg via the SVG library

synchronized (mSvgLock) {
width = w - getPaddingLeft() - getPaddingRight();
height = h - getPaddingTop() - getPaddingBottom();
paths = svgUtils.getPathsForViewport(width, height);
updatePathsPhaseLocked();
}
}
}, "SVG Loader");
mLoader.start();
}
}

load mainly loads the resource file:

/**
* Loading the svg from the resources.
*
* @param context Context object to get the resources.
* @param svgResource int resource id of the svg.
*/
public void load(Context context, int svgResource) {
if (mSvg != null)
return;
try {
mSvg = SVG.getFromResource(context, svgResource);//通过 com.caverock.androidsvg api加载
mSvg.setDocumentPreserveAspectRatio(PreserveAspectRatio.UNSCALED);
} catch (SVGParseException e) {
Log.e(LOG_TAG, "Could not load specified SVG resource", e);
}
}

getPathsForViewport renders the SVG into a capturing Canvas and collects all the drawn paths:

/**
* Render the svg to canvas and catch all the paths while rendering.
*
* @param width - the width to scale down the view to,
* @param height - the height to scale down the view to,
* @return All the paths from the svg.
*/
public List<SvgPath> getPathsForViewport(final int width, final int height) {
final float strokeWidth = mSourcePaint.getStrokeWidth();
Canvas canvas = new Canvas() {
private final Matrix mMatrix = new Matrix();

@Override
public int getWidth() {
return width;
}

@Override
public int getHeight() {
return height;
}

@Override
public void drawPath(Path path, Paint paint) {
Path dst = new Path();

//noinspection deprecation
getMatrix(mMatrix);
path.transform(mMatrix, dst);//将path中的点进行mMatrix变化后,并将最终的path写到dst path中
paint.setAntiAlias(true);//抗锯齿
paint.setStyle(Paint.Style.STROKE);
paint.setStrokeWidth(strokeWidth);
mPaths.add(new SvgPath(dst, paint));
}
};

rescaleCanvas(width, height, strokeWidth, canvas);

return mPaths;
}

rescaleCanvas:

/**
* Rescale the canvas with specific width and height.
*
* @param width The width of the canvas.
* @param height The height of the canvas.
* @param strokeWidth Width of the path to add to scaling.
* @param canvas The canvas to be drawn.
*/
private void rescaleCanvas(int width, int height, float strokeWidth, Canvas canvas) {
if (mSvg == null)
return;
final RectF viewBox = mSvg.getDocumentViewBox();

final float scale = Math.min(width
/ (viewBox.width() + strokeWidth),
height / (viewBox.height() + strokeWidth));

canvas.translate((width - viewBox.width() * scale) / 2.0f,
(height - viewBox.height() * scale) / 2.0f);
canvas.scale(scale, scale);

mSvg.renderToCanvas(canvas);
}

For more on Matrix, see a Matrix source-code walkthrough.

(matrix principle diagram)

The property driven by the animation is percentage:

/**
* Default constructor.
*
* @param pathView The view that must be animated.
*/
public AnimatorBuilder(final PathView pathView) {
anim = ObjectAnimator.ofFloat(pathView, "percentage", 0.0f, 1.0f);
}

Changing the value of percentage drives setPercentage, which updates the paths and redraws:

/**
* Animate this property. It is the percentage of the path that is drawn.
* It must be [0,1].
*
* @param percentage float the percentage of the path.
*/
public void setPercentage(float percentage) {
if (percentage < 0.0f || percentage > 1.0f) {
throw new IllegalArgumentException("setPercentage not between 0.0f and 1.0f");
}
progress = percentage;
synchronized (mSvgLock) {
updatePathsPhaseLocked();//更新path
}
invalidate(); // then redraw
}

Each SvgPath is updated according to progress:

/**
* This refreshes the paths before draw and resize.
*/
private void updatePathsPhaseLocked() {
final int count = paths.size();
for (int i = 0; i < count; i++) {
SvgUtils.SvgPath svgPath = paths.get(i);
svgPath.path.reset();
svgPath.measure.getSegment(0.0f, svgPath.length * progress, svgPath.path, true);
//Given a start and stop distance, return in dst the intervening segment(s). If the segment is zero-length, return false, else return true. startD and stopD are pinned to legal values (0..getLength()). If startD <= stopD then return false (and leave dst untouched). Begin the segment with a moveTo if startWithMoveTo is true
// Required only for Android 4.4 and earlier
svgPath.path.rLineTo(0.0f, 0.0f);
}
}

This is how the path animation is achieved.