1.
private void animateRotation(int degrees, float durationOfAnimation) {
    long startTime = SystemClock.elapsedRealtime();
    long currentTime;
    float elapsedRatio = 0;
    Bitmap bufferBitmap = carBitmap;
    Matrix matrix = new Matrix();
    while (elapsedRatio < 1) {
        matrix.setRotate(elapsedRatio * degrees);
        // width and height are the dimensions of the source bitmap
        carBitmap = Bitmap.createBitmap(bufferBitmap, 0, 0, width, height, matrix, true);
        // draw your canvas here using whatever method you've defined
        currentTime = SystemClock.elapsedRealtime();
        elapsedRatio = (currentTime - startTime) / durationOfAnimation;
    }
    // As elapsedRatio will never exactly equal 1, manually draw the last frame
    matrix = new Matrix();
    matrix.setRotate(degrees);
    carBitmap = Bitmap.createBitmap(bufferBitmap, 0, 0, width, height, matrix, true);
    // draw the canvas again here as before
    // then trigger whatever notification or action you want at the end of the animation
}
This method is excerpted; it is intended for images no larger than 300*300 pixels.
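Note that the while loop above busy-waits until the duration elapses, blocking whichever thread calls it. A minimal usage sketch, assuming the drawing method is safe to call from a worker thread:

new Thread(new Runnable() {
    public void run() {
        // rotate 90 degrees over roughly 500 ms without blocking the UI thread
        animateRotation(90, 500f);
    }
}).start();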
2.
tv.setText("Your Number Is..." + random, TextView.BufferType.SPANNABLE);
Spannable myText = (Spannable) tv.getText();
myText.setSpan(new StyleSpan(android.graphics.Typeface.BOLD_ITALIC), 0, myText.length(), 0);
final Intent intent = new Intent(Intent.ACTION_MAIN, null);
intent.addCategory(Intent.CATEGORY_LAUNCHER);
final ComponentName cn = new ComponentName("com.android.settings","com.android.settings.fuelgauge.PowerUsageSummary");
intent.setComponent(cn);
intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
startActivity(intent);
ComponentName takes two parameters: the package name, and the fully qualified name of the target class within that package.
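Because com.android.settings.fuelgauge.PowerUsageSummary is an internal Settings class rather than public API, it may be missing or renamed on some devices. A defensive sketch, assuming this code runs inside an Activity:

if (intent.resolveActivity(getPackageManager()) != null) {
    startActivity(intent);
} else {
    // fall back to the top-level Settings screen
    startActivity(new Intent(android.provider.Settings.ACTION_SETTINGS));
}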
3.
Uri uri = Uri.fromParts("package", "Your Package name here", null);
Intent deleteIntent = new Intent(Intent.ACTION_DELETE, uri);
startActivity(deleteIntent);
Texture
java.lang.Object
  |
  +--javax.microedition.m3g.Object3D
        |
        +--javax.microedition.m3g.Transformable
              |
              +--javax.microedition.m3g.Texture2D
An Appearance component encapsulating a two-dimensional texture image and a set of attributes specifying how the image is to be applied on submeshes. The attributes include wrapping, filtering, blending, and texture coordinate transformation.
Texture image data
The texture image is stored as a reference to an Image2D. The image may be in any of the formats defined in Image2D. The width and height of the image must be non-negative powers of two, but they need not be equal. The maximum allowed size for a texture image is specific to each implementation, and it can be queried with Graphics3D.getProperties().
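A sketch of querying that limit; "maxTextureDimension" is one of the keys documented for Graphics3D.getProperties() in JSR-184:

java.util.Hashtable props = Graphics3D.getProperties();
int maxTexSize = ((Integer) props.get("maxTextureDimension")).intValue();
// any power-of-two image up to maxTexSize x maxTexSize is guaranteed to be accepted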
Mipmap level images are generated automatically by repeated filtering of the base level image. No particular method of filtering is mandated, but a 2x2 box filter is recommended. It is not possible for the application to supply the mipmap level images explicitly.
If the referenced Image2D is modified by the application, or a new Image2D is bound as the texture image, the modifications are immediately reflected in the Texture2D. Be aware, however, that switching to another texture image or updating the pre-existing image may trigger expensive operations, such as mipmap level image generation or (re)allocation of memory. It is therefore recommended that texture images not be updated unnecessarily.
Texture mapping
Transformation
The first step in applying a texture image onto a submesh is to apply the texture transformation to the texture coordinates of each vertex of that submesh. The transformation is defined in the Texture2D object itself, while the texture coordinates are obtained from the VertexBuffer object associated with that submesh.
The incoming texture coordinates may have either two or three components (see VertexBuffer), but for the purposes of multiplication with a 4x4 matrix they are augmented to have four components. If the third component is not given, it is implicitly set to zero. The fourth component is always assumed to be 1.
The texture transformation is very similar to the node transformation. They both consist of translation, orientation and scale components, as well as a generic 4x4 matrix component. The order of concatenating the components is the same. The only difference is that the bottom row of the matrix part must be (0 0 0 1) in case of a node transformation but not in case of a texture transformation. The methods to manipulate the individual transformation components of both node and texture transformations are defined in the base class, Transformable.
Formally, a homogeneous vector p = (s, t, r, 1), representing a point in texture space, is transformed to a point p' = (s', t', r', q') as follows:
p' = T R S M p
where T, R and S denote the translation, orientation and scale components, respectively, and M is the generic 4x4 matrix.
The translation, orientation and scale components of the texture transformation can be animated independently from each other. The matrix component is not animatable at all; it can only be changed using the setTransform method.
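For example, a sketch that scrolls a texture horizontally by animating only the translation component (the texture variable and offset value are illustrative; the methods are inherited from Transformable):

// shift the s coordinate each frame; orientation, scale and the
// matrix component are left untouched
texture.setTranslation(offset, 0.0f, 0.0f);
offset += 0.01f;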
Projection
The texture transformation described above yields the transformed texture coordinates (s', t', r', q') for each vertex of a triangle. The final texture coordinates for each rasterized fragment, in turn, are computed in two steps: interpolation and projection.
1). Interpolation. The per-vertex texture coordinates are interpolated across the triangle to obtain the "un-projected" texture coordinate for each fragment. If the implementation supports perspective correction and the perspective correction flag in PolygonMode is enabled, this interpolation must perform some degree of perspective correction; otherwise, simple linear interpolation may (but does not have to) be used.
2). Projection. The first three components of the interpolated texture coordinate are divided by the fourth component. Formally, the interpolated texture coordinate p' = (s', t', r', q') is transformed into p'' = (s'', t'', r'', 1) as follows:
p'' = p'/q' = (s'/q', t'/q', r'/q', 1)
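For example, an interpolated coordinate p' = (0.8, 0.4, 0, 2) projects to p'' = (0.4, 0.2, 0, 1).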
Again, if perspective correction is either not supported or not enabled, the implementation may do the projection on a per-vertex basis and interpolate the projected values instead of the original values. Otherwise, some degree of perspective correction must be applied. Ideally, the perspective divide would be done for each fragment separately.
The r'' component of the result may be ignored, because 3D texture images are not supported in this version of the API; only the first two components are required to index a 2D image.
Texel fetch
The transformed, interpolated and projected s'' and t'' texture coordinates of a fragment are used to fetch texel(s) from the texture image according to the selected wrapping and filtering modes.
The coordinates s'' and t'' relate to the texture image such that (0, 0) is the upper left corner of the image and (1, 1) is the lower right corner. Thus, s'' increases from left to right and t'' increases from top to bottom. The REPEAT and CLAMP texture wrapping modes define the treatment of coordinate values that are outside of the [0, 1] range.
Note that the t'' coordinate is reversed with respect to its orientation in OpenGL; however, the texture image orientation is reversed as well. As a net result, there is no difference in actual texture coordinate values between this API and OpenGL in common texturing operations. The only difference arises when rendering to a texture image that is subsequently mapped onto an object. In that case, the t texture coordinates of the object need to be reversed (t' = 1 - t). If this is not done at the modeling stage, it can be done at run-time using the texture transformation. Of course, the whole issue of texture coordinate orientation is only relevant in cases where existing OpenGL code and meshes are ported to this API.
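A sketch of that run-time fix, using the scale and translation components of the texture transformation to compute t' = 1 - t (the texture variable is illustrative):

// components are applied in T R S M order, so the scale negates t
// first and the translation then adds 1, giving t' = 1 - t
texture.setScale(1.0f, -1.0f, 1.0f);
texture.setTranslation(0.0f, 1.0f, 0.0f);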
Texture filtering
There are two independent components in the texture filtering mode: filtering between mipmap levels and filtering within a mipmap level. There are three choices for level filtering and two choices for image filtering, yielding the six combinations listed in the table below.
Level filter   Image filter   Description                                                                  OpenGL equivalent
BASE_LEVEL     NEAREST        Point sampling within the base level                                         NEAREST
BASE_LEVEL     LINEAR         Bilinear filtering within the base level                                     LINEAR
NEAREST        NEAREST        Point sampling within the nearest mipmap level                               NEAREST_MIPMAP_NEAREST
NEAREST        LINEAR         Bilinear filtering within the nearest mipmap level                           LINEAR_MIPMAP_NEAREST
LINEAR         NEAREST        Point sampling within two nearest mipmap levels                              NEAREST_MIPMAP_LINEAR
LINEAR         LINEAR         Bilinear filtering within two nearest mipmap levels (trilinear filtering)    LINEAR_MIPMAP_LINEAR
Only the first combination (point sampling within the base level) must be supported by all implementations. Any of the other five options may be silently ignored.
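The mode is selected with Texture2D.setFiltering(levelFilter, imageFilter). A sketch requesting trilinear filtering, which an implementation is free to downgrade silently:

// FILTER_LINEAR for both the level and image filter requests
// trilinear filtering; unsupported modes fall back to point sampling
texture.setFiltering(Texture2D.FILTER_LINEAR, Texture2D.FILTER_LINEAR);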
M3G requires both the width and the height of a texture image to be powers of two; they need not be equal.
Modifying the Image2D reference associated with a Texture2D object while the program is running is not recommended.
Note: the texture coordinate (uv) convention differs slightly from that of vertex coordinates. In this API the origin (0, 0) is at the upper-left corner of the image, with positive u (s) running left to right and positive v (t) running top to bottom; this is the reverse of OpenGL's bottom-up t axis, as described above.
In its full form a texture coordinate is written (s, t, r, q), where (s, t) corresponds to the uv of common 3D modeling packages, i.e. the (x, y) of the flat texture image; r is only used with 3D textures, and since JSR-184 does not support 3D textures it may be ignored and defaults to 0; q is the homogeneous coordinate and is normally 1.
Texture wrapping offers a clamp mode, WRAP_CLAMP, and a repeat mode, WRAP_REPEAT.
The texture coordinates of the four corners of a quad are defined as follows:
short[] texCoords = new short[]{
    0, 2,  // lower left
    2, 2,  // lower right
    2, 0,  // upper right
    0, 0   // upper left
};
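A sketch of attaching these coordinates to the submesh's VertexBuffer (texturing unit 0, scale 1.0, no bias; the vertexBuffer variable is illustrative):

VertexArray texArray = new VertexArray(4, 2, 2); // 4 vertices, 2 components, 16-bit
texArray.set(0, 4, texCoords);
vertexBuffer.setTexCoords(0, texArray, 1.0f, null);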
The texture image used:
For the clamp mode WRAP_CLAMP, the effect is as follows:
texture.setWrapping(Texture2D.WRAP_CLAMP, Texture2D.WRAP_CLAMP);
With the repeat mode WRAP_REPEAT:
texture.setWrapping(Texture2D.WRAP_REPEAT, Texture2D.WRAP_REPEAT);
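With the 0..2 coordinates defined above, WRAP_REPEAT tiles the image twice in each direction across the quad, whereas WRAP_CLAMP clamps coordinates outside [0, 1], so the image appears once in the quarter of the quad where both coordinates are within [0, 1] and its edge texels are stretched across the rest.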