Prime Eval, Part 6


One of the things that’s weird about n-D space (where n is 1 or greater) is that time is required, regardless of whatever value “n” takes on, in order to allow motion. It’s almost as if time is a liquid form of the next dimension up (n+1). And, there’s a relationship between “n” and “n+1” that I haven’t rambled on about yet.

The last few blog entries have focused on the idea of adding one more dimension to whatever set-up we were looking at at the time. But we can go the other way by taking cross sections. We take cross sections all the time, whether with CAT scans in order to build up a full 3-D image of the human body, with core samples of the earth and of trees, or when we saw through wood. In fact, a cross section of a tree is almost like looking at frozen time. The thickness and spacing between rings tells us how much rain, sun and nutrients the tree received each year, and we can go back and review that as either a line (core sample) or a plane (cutting across the tree).

However, this raises a serious question – how do we decide where along the 3-D space represented by the tree (that is, at what height) to take our 1-D or 2-D sample? Obviously, being closer to the ground is going to give us different data than if we go up into the branches. It’s still a cross section, and it’s still the same tree, but the results are going to be wildly different.

Here’s another example.

Say you have a flat plane, and two people standing in the middle of nothingness in this plane. Person A and person B. In the left diagram, there’s nothing for A and B to stand on so they just remain where they are and there’s no growth. In the second diagram from the left, A and B both get to stand on little line segments. This gives them some wiggle room to move along, but they still can’t travel far enough to meet up. In the third diagram, B’s line segment gets longer, and he (or she) decides to travel along it to the opposite end. Then, in the last diagram, the long segment gets disrupted and turns into 2 smaller segments, with the result that B is now farther away and has no way of getting back to where he had been before. Let’s pretend that all four diagrams are cross sections of the same 2-D object, which is perpendicular to the plane A and B are on, and that by moving the object up and down, A and B are both subjected to varying “realities”; i.e. – their ability to travel, and the line segments that they see will change. The choice of “height” for the object controls the cross section we get. But, if we could somehow “layer” all of the cross sections together we’d be able to tell what that object looks like in 3-D space; or, what it “is”.

Let’s start over, this time with a cross section of a truly 3-D object as seen within our 2-D plane. It’s the same object, but with a width as well as a length. In the left diagram, we get a circle and A and B are standing opposite each other. On this circle, one or the other of them could decide to walk over to join the other person. In the second diagram, the ring turns into a wall, and neither person can look past it to see the other one. To get to the other side, they either have to walk around the outer perimeter; OR, if they’re unlucky, B had been on the wrong side of the ring when it started getting thicker and is now on the inner perimeter with absolutely no way to reach A because the wall is in the way. In the third diagram, B’s side of the wall has sprouted a projection of some type that pushes B farther away from A. Then, in the fourth diagram the projection breaks off from the wall, leaving B stranded. Again, it’s the same object in all 4 drawings, just with different “heights” in determining where to take the cross section. If we layered them all together over time, we’d be able to take our 2-D world and “see” what it looks like in 3-D space.

In fact, it’s a coffee mug. If the 2-D cross section is taken with z at about 0, what A and B will see is a solid disk, and they can travel along the circumference to join up. But, if we limit them to a 1-D cross section of the base of the mug, they just see a single long line segment that they can walk along.

Which brings me to the “frame of reference”. The assumption would be that the coffee mug is sitting flat on the table and we’re just traveling “straight up and down” along the z axis for these cross sections. But, what if that’s not true? What if the mug is sitting at an angle? One possible cross section is what we’d get in the right drawing – a kind of C-shaped wall that we could walk around.

And then we can get into the entire question of which direction is “x”, which is “y”, and which is “z”. If we have a donut (center drawing), laying it flat could give us the cross section on the left. But, standing it up could give us the drawing on the right, if “z” is only a few centimeters up. All of these objects still look the same if we layer the 2-D cross sections together to get a 3-D object; they’re just rotations of the same thing.

If we stand the donut on edge and vary “z”, we can get something of an oval, two separated circles, or two ovals almost, but not quite, touching.
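Just to convince myself that those three shapes really come from one object, here’s a quick numerical sketch (mine, not anything from the drawings) that slices a torus standing on edge and counts how many separate pieces show up at a given height “z”. The ring radius of 3 and tube radius of 1 are arbitrary choices on my part:

```python
import math
from collections import deque

def torus_slice_components(z, R=3.0, r=1.0, step=0.05):
    """Count the separate pieces in the horizontal slice (height z) of a
    torus standing on edge (its donut axis lying horizontally, along x).
    A point is inside the torus when (sqrt(y^2 + z^2) - R)^2 + x^2 <= r^2."""
    ny = int((R + r + 0.2) / step)
    nx = int((r + 0.2) / step)
    inside = {(ix, iy)
              for iy in range(-ny, ny + 1)
              for ix in range(-nx, nx + 1)
              if (math.sqrt((iy * step) ** 2 + z * z) - R) ** 2
                 + (ix * step) ** 2 <= r * r}
    pieces = 0
    while inside:                       # flood-fill each connected region
        pieces += 1
        queue = deque([inside.pop()])
        while queue:
            cx, cy = queue.popleft()
            for nb in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                if nb in inside:
                    inside.remove(nb)
                    queue.append(nb)
    return pieces

print(torus_slice_components(0.0))    # 2 -- two separated circles
print(torus_slice_components(1.9))    # 2 -- two ovals, almost touching
print(torus_slice_components(2.5))    # 1 -- a single oval
```

With those sizes, z = 0 gives the two separated circles, z = 1.9 gives the two ovals almost (but not quite) touching, and z = 2.5 collapses everything into the single oval.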

Why go through all this? Because if each higher dimension is a “fluid representation of space” with respect to time, we have something we can work with to project “n” space to “n+1”. That is, a point plus time, if layered together over a long enough period of time, is going to give us our 1-D line segment. That segment plus time, when layered, will give us a 2-D ring. That ring + time, when layered, will give us a 3-D coffee cup…

In order to make the jump to the next level, we need to observe something that varies with time. Just moving the mug on the table isn’t going to be enough to say “here’s where we’re taking the cross section”. But, there are examples of “frozen time” in 2-D, such as with tree core samples. What would give us “frozen time” in three dimensions? Because this really would be what would allow us to make hyperspace jumps across the galaxy. Just as with A and B standing on a cross section of the coffee mug, the value of z controls their ability to easily “cover great distances” to meet up. If B is stranded on the handle of the mug with z halfway up the mug, too bad. But, move z closer to 0 and B can not only reach the main body of the mug, but the perimeter distance may be smaller when person B walks over to person A. A hyperspace jump could become feasible if: 1) We know what shape the universe is in 4-D space; 2) We know how it’s rotated (the reference frame); 3) We can figure out how to manipulate “z” (AKA: D4).

(Hint: Gravity wells.)


Bokaro P ni Naritai, vol. 30



(Images used for review purposes only.)

I want to be a Vocaloid Producer, vol. 30, 1,500 yen, plus tax.
Well, this is it. Volume 30. I’m not sure exactly what I was expecting, other than a whole bunch of extra MMD model files. I’m not exactly feeling a letdown, but I am surprised that the past year went by so fast, without my even being able to generate part of a video on my own. I blame my schedule and the need for being able to eat periodically. With luck, I can get a bit more done with Vocaloid in the next year. But still… sigh.

New magazine features:
The 4-panel comic has Rana passed out in bed with the teachers sitting next to her, worried. They attribute her condition to her coming down with a cold. At the same time, Rana is dreaming about graduating, and upgrading from Rana 0.5 to Rana 1.0. She imagines that she’s tall and thin now. Robo-panda asks why she’s smiling in her sleep, and Satchan guesses that she’s dreaming about eating lots of sweets. In the classroom section, Rana has recovered, and Robo-panda and the other teachers comment about how she’s learned so much in the course, then go on to describe Vocaloid Editor 4.0, which is available for a 5,000 yen ($45 USD) upgrade fee. The section ends with the teachers offering to let Rana stay at the school as a research assistant. She agrees, then finds that she has to start her new studies right away. There’s also a brief description of the winter coat model file for MMD.


(Diploma.)

The genre this time is “Everyone sings pop”, and the interview is with 40mP. The MMD tutorial shows how Cort did some of the effects for the demo video in a little more detail. Two pages talk about the Vocaloid 4 Editor, showing how to do the installs, and giving some information on the Vocaloid 3 and 4 Libraries. The last two pages are just screenshots of 78 music videos that had been submitted by the readers. The back page is a “diploma” that you can fill in, cut out and frame if you want.

New DVD Features:
Again, no pick-up artist.
There’s the winter jacket model file for MMD, Cort’s finished demo file that had been part of the tutorials for the last 8-9 issues, and installation mirrors for the newest versions of SSW and Vocaloid.


(Vocaloid 4 Editor.)

Tutorials:
Vocaloid:
Time for wrap-up. The tutorial just covers some remaining hints for creating melodies. The first suggestion is to start by writing your lyrics in the key of C. Afterward, you can simply select the entire song and use the up and down cursor keys to transpose the music until it sounds good. Then, tweak the timing. If a phoneme would sound better if held longer, drag the lyrics around as needed and stretch out the target phoneme as much as you want. The tutorial ends with “just experiment, have fun, and make lots of Vocaloid songs.”

SSW:
Here, the editors want to discuss a few extra features in SSW, including tempo changing, and modifying the note positions for MIDI tracks (key shifting). From the Options menu, you can also select Arrange->Chord Settings, to display the chord names, which you can then hand edit. The tutorial ends with a playback of the demo song “Kimi to Dokomade mo” (“I’ll go anywhere with you”), a peppy little marching piece that’s very reminiscent of the Tonari no Totoro (My Neighbor Totoro) opening song.


(Winter coat Rana model, plus the ice stage.)

MMD:
The tutorial starts out recapping Cort’s suggestions for making videos, then repeats the steps for uploading the finished video to Nico Nico Doga, with hints about what to put in the title, tag and description fields. This is followed by suggestions for reaching a wider audience, which include being the first person to comment on your own video, and announcing the link on Twitter or Facebook, as well as participating in Vocaloid online events (such as celebrating Rana’s birthday). Each of the last 10 MMD tutorials is summarized in a vertical scroll. And finally, we get Cort’s finished video, and a remake of “799” using Rana’s new winter coat model file.

Additional comments:
It’s hard to say whether this series was worth the money for me. There’s definitely a lot of information here, and all three of the applications – Vocaloid, Singer-Song Writer and Miku Miku Dance – are huge. Lots of features. But, I really have to sit down with them for a couple of uninterrupted months to see what I can accomplish with them. I’ve mentioned before that MMD produces really slow animations on my laptop. I don’t know if it’s because of old drivers, limited memory, or what, but I’m pretty sure I need to upgrade to a better desktop machine to make dance videos. I do like SSW a lot, and I plan to do some work with Vocaloid soon. But, now that the series is over I do have to decide what to run next on this blog.

In the meantime, here’s a capsule ball figure of Miku Hatsune.

Finally!


There are signs of a new Gakken kit! There’s nothing mentioned on the Otona no Kagaku site, or on the Facebook page. But, Amazon.jp has the artwork for the Tornado Maker kit cover and a Nov. 12 release date for accepting preorders. It’s priced at 3,780 yen, which is getting up to the high end of “affordable” for something that only has one function and can’t easily be repurposed (from what I can tell right now). Still, I’ll probably buy it when it gets to Kyushu (assuming the release date doesn’t slip…)

Prime Eval, Part 5


Electronics makes extensive use of imaginary numbers. They’re really not “imaginary” – they do exist in the real world, they’re just phase-shifted by 90 degrees. And this is where language colors perception again. Our ancestors were thinking linearly in one dimension, and didn’t account for situations where two things could happen at the same time but in different directions. Additionally, when I was in high school, I was taught that proper functions drawn on graph paper couldn’t have 2 simultaneous values for y. Or rather, curves couldn’t loop on themselves. Meaning you weren’t supposed to draw circles on the graph because it wasn’t a “proper” function. I’m pretty sure those days are long in the past now, but we still use “real” and “imaginary”, and we shouldn’t. Especially since in 3D space we have 3 sets of numbers – real (x-axis), imaginary (y-axis) and up/down (z-axis)…

As I mentioned in my earlier blog series, “i” is how we change directions in 2D space, switching from the x- to the y-axis and back. Or, rather from the “real” to the “imaginary” axes, when we have an equation in the form of y = a + i*b.

Say we have a boat trying to cross a river. The boat is going from east to west at 10 miles per hour, while the river is flowing north to south at three miles per hour. If the river is 200 feet wide, how far downstream will the boat go before hitting the opposite shore? It’s easier to work in meters. 10 miles/hour = 16093 meters/hour, or 4.47 meters per second. 3 miles/hour = 1.34 meters/second. 200 feet = 60.96 meters. In the first second, the boat will go diagonally 4.47 meters, placing us 1.34 meters downstream. Straight east-west, we’ve gone sqrt(4.47^2 – 1.34^2) = 4.26 meters. At this rate, it will take us 60.96/4.26 = 14.3 seconds. We hit the opposite shore 14.3 * 1.34 = 19.2 meters farther downstream than when we started. Which part of this scenario is “real” and which is “imaginary”? (To double check the numbers, the total diagonal distance is 14.3 * 4.47 = 63.9 meters, and 63.9^2 = 4083. Meanwhile, 19.2^2 + 60.96^2 = 369 + 3716 = 4085, which matches to within rounding.) If “y” is our position in the river at any given time, “t”, and downstream is “minus y”, then y = 4.26*t – i*1.34*t. Or, y = (a + i*b) * t, where a = 4.26 and b = 1.34. Both “a” and “b” are real in that we can measure them in 3-D space, it’s just that we need to use “i” to change directions along the way. That is, “i” gives us a phase shift (and is often simply called a placeholder).
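Here’s the same boat scenario sketched in Python, with the across-river progress as the “real” part and the downstream drift as the “imaginary” part of a single complex number. The unit conversions are redone from scratch here, so the rounded figures may come out slightly different from the ones typed above:

```python
import math

# Boat crossing a river, modeled as one complex position: real part is
# progress across the river, imaginary part is drift downstream (negative y).
boat_speed = 10 * 1609.34 / 3600     # 10 mph in m/s (about 4.47)
current    = 3 * 1609.34 / 3600      # 3 mph in m/s (about 1.34)
width      = 200 * 0.3048            # 200 feet in meters (60.96)

# The boat covers its full speed along the diagonal track, so the straight
# east-west component is what's left after the drift is taken out.
across = math.sqrt(boat_speed ** 2 - current ** 2)
velocity = complex(across, -current)       # the (a + i*b) form, times t
t = width / across                         # seconds to reach the far shore
landing = velocity * t
print(round(t, 1), "seconds,", round(-landing.imag, 1), "meters downstream")
# -> 14.3 seconds, 19.2 meters downstream
```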

In 2D space, 1 * i = i
i * i = -1
-1 * i = -i
-i * i = 1
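That four-step cycle can be checked directly in Python, where the complex literal 1j plays the role of “i”:

```python
# The 2-D rule table, with Python's complex literal 1j standing in for "i".
i = 1j
assert 1 * i == i        # 1 * i = i
assert i * i == -1       # i * i = -1
assert -1 * i == -i      # -1 * i = -i
assert -i * i == 1       # -i * i = 1
print("four multiplications by i bring us back to 1:", 1 * i * i * i * i == 1)
# -> four multiplications by i bring us back to 1: True
```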

What if we step into 3D space? We need a second placeholder. Instead of being in a boat, let’s use a glider. We have a tailwind for a, a crosswind for b, and gravity for c. Starting with the glider on the side of a 50-meter tall cliff, where will we land if we glide 10 meters forward for every meter we drop, and there’s a 2 meter/s crosswind? Same math. How do we change axes?
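Before getting to the axis bookkeeping, note that the glider question is missing one number – how fast the glider sinks. Purely as my own assumption, give it a sink rate of 1 meter per second, and the landing spot falls out of the same kind of arithmetic:

```python
# Glider landing spot. The sink rate below is my assumption, not something
# stated in the problem; the other numbers are from the question.
sink_rate = 1.0      # m/s downward -- assumed
glide_ratio = 10.0   # meters forward per meter of drop (given)
crosswind = 2.0      # m/s sideways (given)
cliff = 50.0         # meters of height to lose (given)

airtime = cliff / sink_rate          # seconds aloft
forward = glide_ratio * cliff        # forward distance covered
sideways = crosswind * airtime       # crosswind drift
print(airtime, forward, sideways)    # 50.0 500.0 100.0
```

So under that assumption we land 500 meters out from the cliff and 100 meters off to the side.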

In 3D space, we could do something like y = a + i*b + j*c
The rule would be:
1 * i = i
i * i = -j
-j * i = -ji
-ji * i = j*j
j * j = -1

We’re now moving around in a unit sphere. And you know what? We can do this in 4D space, too. j*j = -k. k*k = -1. As long as we keep enough spare placeholders in our alphabet, we can extend the math into as many dimensions as we want (which brings us perilously close to string theory…)
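As a sanity check – and this is strictly my own toy reading, since the placeholders above may be meant as genuinely new units, quaternion-style – the 3-D rule table happens to be satisfiable inside ordinary complex numbers, if “i” is taken as the eighth root of unity e^(-i*pi/4) and “j” as the usual imaginary unit:

```python
import cmath

# My own toy realization (an assumption, not the post's): let "i" be the
# eighth root of unity exp(-i*pi/4), and "j" be the ordinary imaginary unit.
i = cmath.exp(-1j * cmath.pi / 4)
j = 1j

assert cmath.isclose(i * i, -j)            # i * i = -j
assert cmath.isclose((-j * i) * i, j * j)  # -ji * i = j*j
assert cmath.isclose(j * j, -1)            # j * j = -1
print("rule table holds")
```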

Somehow, as the math implies, it should be possible to keep taking right angle turns (a point to a line, a line to a plane, a plane to a solid, a solid to a hyper-solid). Is there an upper limit to the number of dimensions that do exist? (Note that I use “do”, not “can”.) And, what is a “dimension”, really?

When we graph an equation, such as y = a*x + b, or y = 2 * x^2, we’re making some assumptions. Mainly, that either the units don’t matter, or that they work out right. A simple line, y = 2 * x + 4, has an implied unit of “1”. For every “1” of whatever it is that x is, y will increase by “2” of the same thing. If “x” is apples, then if x increases by 1 apple, y increases by 2 apples.

Let’s look back at the boat example. The boat had a velocity of 4.47 meters/second. The river current had a velocity of 1.34 meters/second. The river had a width of 60.96 meters. And “t” had a time in seconds. y = 4.26*t – i*1.34*t. Technically, this should have been y = (4.26m/s – i*1.34m/s) * t. Then, if t = 5 seconds, y = 21.3ms/s – i*6.7ms/s. The seconds cancel and y is the positional distance from the shore, as 21.3m – i*6.7m. In this case, if we plot the boat’s position on a map, the x-axis and y-axis (latitude and longitude on the map) will both be marked off in 1 or 10 meter increments, and we’ll have multiple data points, one for each second the boat is in the water. However, if we want to plot “y” as a function of time, then maybe it’s more convenient to make the x-axis “t” in seconds, the y-axis meters/second for boat speed, and the z-axis meters/second for current speed (actually, it’d be easier to use vector math, or just draw points on the map of the river using different representational curves).

But, regardless, in one case we have meter – meter axes, and in the other we have seconds – meter/s – meter/s. We could have dollars/second, or pounds of colored dye/batch size (if we’re making hair dye). What’s the difference between a set of axes plotting factory production costs, and one plotting 3-D space?

Bokaro P ni Naritai, vol. 29



(Images used for review purposes only.)

So close… getting so close. Just one more issue after this one and the series wraps up. I really do want to mail in the proof of purchase seals and try to get the process started for obtaining the commercial versions of the serial numbers, and to see if maybe the publishers will send me any of the goodies that the people who paid in advance for the full subscription received (like the seals and special MMD model files).

I want to be a Vocaloid Producer, vol. 29, 1,500 yen, plus tax.
New magazine features:
I need to start with the classroom section first this time. Rana has received a lot of information over the last year, and she’s complaining that it’s like drinking water from a fire hose. Meaning, she’s having a hard time deciding what kind of music to write and videos to produce. So Robo-Panda provides a 5-step summary. 1) Think about what it is you want to make. What kind of music do you like, and is that what you want to create yourself? 2) Listen to the music that you like and try to get ideas from the artists already producing that music. 3) Decide the genre and instruments you want to use. 4) Set a deadline and keep to it. 5) Release the song when the deadline arrives. At the end of the section, we’re directed to read the 4-panel comic, in which Rana has finished her graduation project and the teachers all congratulate her for it. As she accepts the praise, she whispers “I can do anything by myself now”, and then keels over and passes out while the teachers all go into a panic.


(Alice and Azuki.)

The music genre is comedic Trans-Pop and the interview is with Utata. The SSW tutorial includes screen caps of the mixer screens (for the GraphicEQ, Compressor, Distortion, Stereo Enhancer and Delay options) showing the settings used on all three vocal tracks of the demo song. There’s a mention of a new musical instrument model for MMD, a combo DJ scratch turntable/synth keyboard/trumpet. Plus there are instructions in the MMD tutorial for using AviUtil and installing the Ut Video codec. There are two pages on motion capture, introducing some demo motion files for MMD and running a short interview with the company that made the demo motion file – CGCG – plus a fake introduction of two of CGCG’s mascot characters – Alice and Azuki (shown above).

New DVD Features:
Again, no pick-up song this time.
However, there are the demo files for Cort’s instructions on AVIUtil, and using the motion capture data from CGCG to make a complete music video. Plus, the model data for the hybrid synth, the “Colorful Pin-key”.


(Rana posed with the new Colorful Pin-key synth, with the island stage.)

Tutorials:
Vocaloid:
More lecture, although you can follow along and change the demo song file if you want, but there’s not much point unless you specifically need this section. The featured technique is changing tempo, which is really very simple. Just bring up the Tempo menu, specify the point in the song you want to change, enter a new value in BPM (beats per minute) and click OK. The editors focus mainly on the idea of a “story song”, where the tempo changes when the singer changes scenes in the story. The demo song is “Koufuku no Mi” (The Seed of Happiness), which starts out like a slow folk song, then jumps into a higher gear. Otherwise, it’s not really that outstanding a piece. The music’s good, I’m just not that taken by the insistence on having Rana sing in such a high register.

SSW:
The idea here is to take Rana’s vocal track (29:Vox), break it up into pieces at points 10.2 and 16.4, copy the track and rename it so you get 29:Vox1, 30:Vox2 and 31:Vox3, then delete the unnecessary duplicated sections on each track (so that Rana sings up to 10.2 on Vox1, from 10.2 to 16.4 on Vox2, and from 16.4 to the end of the song on Vox3). One interesting thing I hadn’t known before is that when you open the Mixer window, you can save and load Graphic EQ and Compressor settings to and from data files. So, we save the 29:Vox1 settings and load them into 30:Vox2 and 31:Vox3. The editors want a “lo-fi” effect on the intro vocals, so the Vox1 GraphicEQ, Compressor and Distortion settings are changed to be High Pass, with high Q values at the upper end of the frequency band. This gives a really cheap, tinny speaker effect to the track. The video cheats by skipping over the instructions on Vox2 and Vox3, instead directing the student to load the finished song files and check the settings the editors used. Granted, the effects settings are given in the magazine, so it’s not that big a deal. However, they do go into a bit more detail on what the Delay function is and how it works. The video ends with a playback of the final song.


(Mocap Rana in the school hallway stage.)

MMD:
We’re moving out of the realm of MMD activity and relying on AviUtil more. The idea is to introduce scene transitions, so you save separate renders of just the character and just the background for the frames of each specific scene (using a file naming convention like 0-1099_Character and 0-1099_background) for the entire video. Once you have this in place, you can use AviUtil to apply flare, shading, and blur effects to the Rana layer. The video skips over the details, referring instead to a separate tutorial video on the DVD-ROM. It finishes with excerpts of the second tutorial overlapped with the motion capture dance for the “Firecracker” demo video.


(Screen cap from the AviUtil Sugu Dekiru tutorial video.)

AviUtil Sugu Dekiru (I can use AviUtil quickly):
This is Cort’s tutorial for using AviUtil with the sample files on the DVD-ROM to apply the effects described in the MMD tutorial. It does look like this is a really powerful application, but I’m still hesitant to download it and put it on my PC. Maybe some day in the near future, but not right now. On a side note, this tutorial doesn’t have a soundtrack; it’s just the video instructions.


(Screen cap from the “What is Motion Capture” demo video. Note the insert with the live dancer in the lower right corner.)

What is Motion Capture?:
There’s an extra folder on the DVD-ROM that contains the MMD motion files for Rana to create the “Firecracker” demo video, plus a few background .pngs and the song .wav file. The instructions for making the video are in the magazine. But, there’s a video file in the folder as well that’s worth watching. It has the finished “Firecracker” music video, plus an insert in the lower right corner with the dancer who was used to generate the capture data. So, you can see the dancer in the capture suit, and compare that with Rana as she’s doing the same moves. It’s obvious that there’s been tweaking of the hands and facial expressions, but otherwise Rana’s movements are pretty faithful to the human dancer she’s mimicking. It’s fun to watch.

Additional comments:
I’m getting a bit burned out on the Vocaloid side, because so much of it is just music theory that I’m not ready for. I like the SSW stuff better, because I’m always ready to mess with sound settings to get various effects out of the instruments, so this issue was worth working on. The MMD and “AviUtil Sugu Dekiru” stuff goes way beyond what MMD itself can do, and is more about post-processing the video files. It’s nice-to-know material, but as I mention above, I don’t trust third-party executables. The “What is Motion Capture” video is fun, but since MMD doesn’t come with video cameras and a capture suit, it’s not like we can do any of the motion capture work ourselves. On the other hand, we do get the data files on the DVD-ROM to assemble the finished video, so there is a sense of accomplishment if you take the time to do that.

Now, some comments on the MMD projects for this volume. First, I wanted to put together an MMD sample video with Rana and the Colorful Pin-key synth. So, I loaded the Rana model, and followed this with the Pin-key model file. There are two files that are misnamed as “motion” data, when they’re really just fixed poses. One is for Rana so she looks like she’s holding the synth in her hands. The other is for the synth so it’s rotated to look like it’s being held by Rana. The two just don’t line up right when they’re loaded into the project. That was disappointing. I had to spend several minutes repositioning the synth, and even then the strap is too loose over Rana’s shoulders and her hands aren’t located over the keys of the instrument. So, basically the pose files aren’t that useful. I just threw in the island dance stage and took a screen cap of that to show off the synth.

Second, I wanted to put together the demo video for “Firecracker”, using the motion capture data and camera data from the DVD-ROM. This turned out to not be feasible, because all the keyframe data makes for a HUGE file. My laptop just doesn’t have the speed or memory for this, and I had to wait 3-4 minutes just for the data to load. Then, when I added the music track, MMD simply died on me, destroying the model structure and corrupting the avi output file (Windows media player complained about a “bad file”). I had to disable the .wav file, and then the video could be saved to disk, anyway, although it still played back in media player way too slow. I added the school hallway dance stage and output just enough frames to give me something to screen cap for this blog review.

One thing I’m still annoyed by is that I’ve never figured out how to export a .jpg freeze frame from MMD. MMD only seems to allow frame 0 to be saved as a still image, and in every single video I’ve made, frame 0 is solid black. All the other frames output right, so instead of making stills, I’ve always output the first 60-100 frames as an .avi file, then played that in media player so that I can pause on the frame I want, and follow this with PrtScr, and pasting the screen cap into Gimp. I know that it is possible to save to a still, because the editors did that with the Christmas card project in the volume for December last year. I just don’t know HOW they did it.

Anyway, if I ever do create my own videos with Rana, with a soundtrack, I’m going to need to upgrade to a much more powerful desktop, I’m afraid.

Prime Eval, Part 4


In part 3, I jumped from one to two dimensions, and invoked an x-y axis system, thereby kind of getting ahead of myself.

But, when we switch to geometry, we need to deal with shapes, and the first real place we can do that is when we go to a 2-D plane. A line segment has no width, so the shape it forms will have no area. The same holds true if we have a shape formed by 2 line segments. It’s when we hit three segments, or a shape with 3 sides, that we can actually say we have a shape and that the shape has an area. I’m pretty sure that the ancient Greeks would like to claim to have made all the discoveries there are about triangles, such as the angles adding up to 180 degrees and all that, and maybe they did (according to the wiki entry, they didn’t, and that’s good enough for me). Anyway, I wasn’t there at the time, so I remain skeptical. But one thing that they (allegedly) observed is that triangles that have one angle of 90 degrees are a lot easier to work with than the other kinds.

Say you have a random triangle and you want to know what its area is. You could try doing all kinds of tricks if you like. But, drawing a perpendicular line from the longest side up to the opposite corner, and finding the two lengths it creates along that side, will give you two smaller right-angle triangles. How do we define “perpendicular”? By saying that the angles created on each side of the line are equal. Since the longest side is a straight line, it represents a 180-degree angle at the start. The only way the two angles on each side of the perpendicular line can be equal (in flat 2-D space) is if they’re both 90 degrees. Put four of these angles at the corners of a test object and we get a four-sided thingie we call a rectangle. From this, we can see that our random triangle can be divided into two smaller right triangles, each of which is half the area of the rectangle formed by its two perpendicular sides.

So, for a random triangle, we can find two smaller right-angle triangles within it. Take the lengths of the two shorter sides for each new triangle and multiply them together and divide by two to get the areas of each right-angle triangle. Then add the two smaller triangle areas together to get the final answer for the area of the original triangle.
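That recipe is easy to turn into code. Here’s a sketch (the function name and coordinate convention are mine) that drops the perpendicular onto the longest side and adds up the two right triangles:

```python
import math

def area_via_right_triangles(A, B, C):
    """Area of any triangle, computed the way described above: drop a
    perpendicular from the opposite corner onto the longest side, then add
    up the two right triangles that creates (legs multiplied, halved)."""
    pts = [A, B, C]
    # The apex is the vertex opposite the longest side.
    opposite = {0: (1, 2), 1: (2, 0), 2: (0, 1)}
    apex_idx = max(range(3),
                   key=lambda k: math.dist(pts[opposite[k][0]], pts[opposite[k][1]]))
    P, Q = pts[opposite[apex_idx][0]], pts[opposite[apex_idx][1]]
    apex = pts[apex_idx]
    # Foot of the perpendicular from the apex onto line PQ.
    vx, vy = Q[0] - P[0], Q[1] - P[1]
    t = ((apex[0] - P[0]) * vx + (apex[1] - P[1]) * vy) / (vx * vx + vy * vy)
    foot = (P[0] + t * vx, P[1] + t * vy)
    h = math.dist(apex, foot)                  # the shared leg (the altitude)
    return (math.dist(P, foot) * h + math.dist(foot, Q) * h) / 2

print(area_via_right_triangles((0, 0), (4, 0), (1, 3)))   # ≈ 6.0
```

For the triangle with corners (0, 0), (4, 0) and (1, 3), this matches the usual shoelace-formula answer of 6.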

Add a side. Get some kind of random 4-sided thingie. What’s the area? Well, draw a line from one corner to the opposite one, then break the two resulting triangles into 4 right-angle triangles. Multiply the two perpendicular sides of each, divide by 2, and add the four areas together. Doesn’t matter what the initial shape is; if we cut it up so that we have something with 3 sides, and one angle is 90 degrees, the math becomes really simple. And, of course, what I’m leading to is the special case represented by a rectangle, where all four corners are right angles and the 2 pairs of opposing sides are of equal length. Which then ties into the perpendicular lines used to draw the x-y graph. It’s all very arbitrary, but if you ask the question “are mathematicians lazy?”, and you get the answer “yes” for at least one of them, then doing the math in Cartesian space with right-angle triangles and rectangles is going to save you a lot of otherwise needless work.

One thing that I hadn’t seen when I was in my high school math class was the following representation of the Pythagorean Theorem. We all know (I hope) that Pythagy (as he was called by his friends, although he kept saying he hated that name) was the first person on permanent record as saying that for a right-angle triangle, the square of the length of the longest side (the hypotenuse) is the sum of the squares of the two shorter sides. But, what exactly does that mean?

Take a right triangle. If we take a second triangle of the same dimensions and flip it around, we’re going to make a rectangle. The area of the rectangle is the product of its length and width. Therefore, the area of the triangle will be one half of the rectangle it forms. We want to connect that area somehow to the length of the longest side (the hypotenuse). What Pythagy realized was that the two other legs could create squares themselves, and that squares are just a special case of the rectangle.

Digression: Is there a relationship between a rectangle and a square? Well, if we draw out the rectangle on graph paper, and then build a square on the sum of its two sides, we get four pieces that fit next to each other to form a bigger square. Say the original rectangle was 4×3. 4×4 = 16. 3×3 = 9. And the rectangle itself shows up twice, as 3×4×2 = 24, for a total area of 49. While the big box now has a side of 7, or the square root of 49… (a+b)*(a+b) = a^2 + 2ab + b^2.

Umm… Maybe. I’d swear I’ve seen that formula before…

End of digression. Anyway, this is what I’ve seen: the areas of the squares built on the two shorter legs, added together, equal the area of the square built on the hypotenuse. I mean, yeah, it’s just a visual representation of c^2 = a^2 + b^2. But I don’t remember ever seeing that when I was in school. It makes a lot of sense now. I just don’t understand why I can’t remember having seen it when I was in school… On the other hand, Pythagy’s proof was a bit more straightforward than what I’m used to.


(Image from the wiki article)

If you lay out four triangles with legs a and b as shown in the left image, you get a tilted square in the middle with a side of c, and an area of c^2. Rearranging the triangles to make the image on the right is functionally equivalent, but it gives you two smaller squares of areas a^2 and b^2. Since the space in white has the same area in both images, it’s apparent that the areas a^2 plus b^2 are equal to the area c^2. The total space is (a+b)^2 again, and the only difference between what I’d drawn for my rectangle and the Pythagorean Theorem is that we’re subtracting out the 2ab part. That is, the total space in the left image has an area of c^2 + 4*(a*b/2) = c^2 + 2ab, while the total space on the right is a^2 + b^2 + 2*ab (just add up each of the squares and triangles). Since the total areas of both images are equal, c^2 + 2*ab = a^2 + b^2 + 2*ab. Or, c^2 = a^2 + b^2.
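The area bookkeeping from the two images can be spot-checked on the classic integer right triangles, where all three side lengths are already known (the triples here are my choice of test data, not anything from the proof itself):

```python
# Check: tilted c-square + 4 triangles == two leg-squares + the same 4 triangles,
# for a few well-known right triangles.
for a, b, c in [(3, 4, 5), (5, 12, 13), (8, 15, 17)]:
    left = c * c + 4 * (a * b / 2)     # left image: c-square plus 4 triangles
    right = a * a + b * b + 2 * a * b  # right image: a- and b-squares plus 4 triangles
    print(a, b, c, left, right)        # the last two columns always match
```

Both expressions are just (a+b)^2 counted two different ways, which is the whole trick of the rearrangement proof.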

I’ve been flogging a dead horse here, but I just want to get the basics out of the way before proceeding any further. (Plus, it wasn’t my horse.)

Utsukushii Kiri-e


(All rights belong to their owners. Images used here for review purposes only.)

Utsukushii Kiri-e (Beautiful Kiri-e, 1600 yen, Aug. 2015), by Shinobu Ohbashi
I was looking for a specific kind of kiri-e (paper cutting artwork) pattern recently, so I went to the Junkudo bookstore in Maruya Gardens, and checked out their kiri-e section. I spent over an hour paging through close to 60 books without seeing anything that came close to what I wanted. Most of the patterns were either too simple – flowers and stars – aimed at beginners and younger children, or too elaborate – goth lace designs. I finally settled on Utsukushii Kiri-e, which was just published this August. Utsukushii means “beautiful” or “pretty”, and that pretty much sums up the artwork here.


(Example cutting instructions.)

The book starts out describing Ohbashi’s approach to kiri-e, which is slightly different from mine. He prints the patterns in blue, and then glues the corners of the pattern sheet to the piece of black construction paper underneath. I can see the benefit to this, in that staples can catch on the cutting board as you’re rotating the sheet around, and it’s easier to differentiate the pattern from the edge of the base paper if they’re not the same color. (The advantage of staples, though, is that you can always add more of them in places where the paper starts to buckle after you do a lot of cutting.) Also, the author uses a longer cutting blade, more like an X-acto knife. I tried that before, but I find that a much shorter blade makes it easier to cut along curves, because the blade won’t flex as much. He also violates the rule of “cut away from the corners”, by cutting his lines going down into the corner at the end of the cut. I guess that works for him. I follow the “cut starting from the corner” rule, because you’re less likely to get fuzzy pieces of paper sticking out of the corners that way, especially if you overlap your cuts, and the paper is less likely to tear when you pull on it while working on other parts of the design. (Doing it that way does cause different problems for me, though.) Finally, he uses a spray adhesive, and I use a simple roll-on glue tube (the same glue as used for sealing envelopes). I really like the idea of using a spray, especially when preparing the finished artwork to be mounted on the backing board.


(Some example patterns. None of these are all that simple.)

The book primarily acts as a showcase for Ohbashi’s finished works, which have been turned into glass display pieces, metal sheet cut-outs, etc. This is fine if you want ideas for presentation, but it makes it harder to use the pictures as patterns yourself. To overcome this drawback to the book, the publishers have put the plain black and white (not blue and white) patterns in Word files on their website for download as password-protected zip files. You need to buy the book in order to get the passwords. I think this approach is pretty good because you don’t have to destroy the book to make the pictures, and the Word files are already formatted to A4 paper size. Just print out the page you want to work on.

The pictures are all fairly elaborate, and feature lots of flower and gem embellishments. Most of the pictures include animals, as shown on the cover. There are also patterns for the Japanese hiragana character set, and the counting numbers 0-9. Unfortunately, the bubbles and petals surrounding the numbers interfere with each other, so they’re not practical for building up entire number strings, like “7” + “1” + “3” (same holds true for the hiragana characters, for that matter). But, if you want to illustrate a page of a book, with one big letter at the top left of the page, then these kiri-e alphanumeric patterns are pretty good.

As I mentioned above, I was looking for a specific pattern and I didn’t really find what I wanted. I settled for a different pattern, and it took me close to 3 full days of steady work to finish it, about 24 hours of cutting in all. I’m not sure I want to spend that much time on another kiri-e anytime in the near future. But, I do recommend this book if you want ideas for yourself.

Prime Eval, Part 3


By a strange quirk of nature, humans for the most part have two hands, two feet, five fingers per hand, and five toes per foot. Some people may not have noticed this before, which is why I mention it. The rest of us probably started out as children (ok, technically we may have started out as zygotes and then worked our way up, but a couple of people could have skipped a few steps in between; who knows. It IS possible and I wasn’t there to verify everything…) learning to count by using our fingers and toes. And through one part chance and another part succumbing to convenience, we settled on a number system using the digits 1 to 9, with a base of 10. As I mentioned in my last blog series, it took a while for the concepts of zero and negative numbers to catch on (where “a while” = a handful of centuries).

Not all societies have fully converged on this in their written systems. Japan uses kanji to represent numbers, but resorts to Arabic numerals for most math operations. Things get tricky because the groupings jump by powers of 10,000 rather than 1,000. The Japanese words for numbers go “ichi” (1), “juu” (10), “hyaku” (100), “sen” (1,000), “man” (10,000), “juu-man” (100,000), “hyaku-man” (1,000,000), “sen-man” (10,000,000) and “oku” (100,000,000). Converting between the Japanese and Arabic systems gets confusing sometimes. Then again, Americans refuse to go metric, so we don’t have a foot to stand on in this argument.
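A tiny sketch of why the conversion trips people up: Japanese names digits in chunks of 10,000, so a million lands mid-chunk as “hyaku-man”. The helper below is my own illustration, with deliberately simplified romanization:

```python
# Break a number into base-10,000 chunks and label each chunk with its
# myriad unit (man = 10^4, oku = 10^8, chou = 10^12).
UNITS = ["", "man", "oku", "chou"]

def to_myriads(n):
    """E.g. 123456789 -> '1oku 2345man 6789'."""
    chunks = []
    i = 0
    while n > 0:
        n, chunk = divmod(n, 10000)  # peel off the lowest 4 digits
        if chunk:
            chunks.append(f"{chunk}{UNITS[i]}")
        i += 1
    return " ".join(reversed(chunks)) or "0"

print(to_myriads(1000000))    # "100man" -- one million is "hyaku-man"
print(to_myriads(100000000))  # "1oku"
```

The Western comma groupings (1,000,000) slice the same digits every 3 places, so the two systems’ boundaries only line up every 12 digits.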

But why? Why integers? Yes, they’re fine if you want to buy a peck of pickled peppers, but so much of what surrounds us can’t be addressed in integer form. As an ESL teacher (English as a second language), I’m constantly talking about count and non-count nouns. The non-counts are the ones that encompass non-integer volumes; i.e. – time, water, news, noise and air pollution. While I can’t say “I have 1 water”, I can say “I have 1.05439 liters of water”. And it’s that part to the right of the “.” that our ancestors should have started with. But I guess that’s what you get when those pesky Mesopotamians back at the beginning of your gene line wanted to be merchants instead of water farmers, creating a system for tracking how many head of sheep and cows were being traded, rather than how much water is under the bridge.

Really, starting from basics, if we have a dimensionless point and we push it in one direction, it takes on one dimension with a really, REALLY small increment size. Let’s pretend that it’s “0.01”, which is good enough for jazz. As we make our line longer, the length will go “very smoothly” (work with me here) from 0.00, to 0.01, 0.02, 0.03, etc., up to infinity. Because the pencil attached to our point hasn’t been sharpened for a while, what we’ll get is a smooth, continuous line, but it’s something with a measurable length. If we jump forward and invoke the memory of René Descartes, we will want to put the line on a graph in Cartesian space to see what that length is. And here’s where things get circular… What markings do we put on the x-axis to make the graph useful?

The weird thing about space is that 2 dimensions are better than one, and there is one special number that stands out if you go one more dimension up. If you multiply each length along the continuous line by itself, you’re forming squares, and the result of this operation, one side times the other, gives you the area of each square. Yes, we all know this. It is common sense. The point is that common sense often obscures what we’re looking at. And, there is one very special square where the area is equal to the length of either side (assuming that everything is unitless).

If we graph our continuous line in 2-D space and check our infinite number of squares, there will be only one with a non-zero side that fits our criterion of the length of the side of the square being the same as the area of that square (x = x^2, with x > 0). If we then mark that point on the x-axis as our “special unit step size”, we have our graph. Rather than defining “1” as the first integer within the set of all counting numbers, and then being left with the question of what to do with “0”, “-1” and “3.14159…”, we should have gone with a continuous line of infinitely small increments and then captured the value that gives us a unit box when we try to graph it. Then maybe we would have picked better names than “counting”, “rational”, and then “irrational” for numbers that don’t terminate. Because I think that PI is a perfectly rational number.
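The “scan the line for the special square” idea can be faked numerically. This is just a blunt brute-force sketch, reusing the pretend 0.01 increment from a few paragraphs back (the tolerance is my own choice to absorb floating-point noise):

```python
# Walk along the line in 0.01 steps and keep every length whose square
# equals the length itself, i.e. x == x**2 for non-zero x.
step = 0.01
matches = [round(x * step, 2)
           for x in range(1, 1000)             # lengths 0.01 .. 9.99
           if abs((x * step) - (x * step) ** 2) < 1e-9]
print(matches)  # [1.0]
```

Only one non-zero length survives the scan, which is the unit step the argument is after; the other solution of x = x^2, zero itself, was excluded by starting the range at 1.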

Bokaro P ni Naritai, vol. 28



(Images used for review purposes only.)

I want to be a Vocaloid Producer, vol. 28, 1,500 yen, plus tax.
New magazine features:
In the 4-panel comic, Rana is having a bad dream about not being able to eat more ice cream, when she’s visited by a shadowy creature that turns out to be “Rana Version 4.0”. The classroom section talks about the history of Vocalistener, Vocaloid 2 Prologue, and Vocaloid Kaito, then gets into the really big news – that Vocaloid 4 Editor is going to require additional study (the students aren’t done with Vocaloid yet). And Vocaloid 4 for Rana is going to be an additional 5,000 yen if you upgrade from the versions supplied on the magazine DVDs. Vocalistener is a Job plug-in, also available for the V4 Editor, used for analyzing a human singer’s voice and converting it for use by the Vocaloid engine. The featured genre is rap, and rather than having one guest artist this time, there are 9 videos suggested as viewing references. The MMD section has Cort talking about how he applies effects to his videos, with a comparison between the MME effects plug-in and AviUtil. The magazine ends with the introduction of Talkaloid, which allows users to employ the Vocaloid package for producing plain spoken speech. Gobou-P created a PDF user’s manual, called “To-ku Dekiru” (I Can Talk), which is included on this volume’s DVD. There’s a mention of the Rana upgrade for the Vocaloid 4 Editor, which will be available from the Vocaloid shop in December. And there’s a sidebar saying that the Joysound karaoke box chain has started including Vocaloid-produced songs on their playlists.


(Cort’s examples of what not to do. 1) Don’t add so many effects that it slows down the computer. 2) Don’t make the effects so strong you can’t see Rana through them.)

New DVD Features:
A few new items this time, although there’s no pick-up artist song again. For MMD, there’s a shader plug-in from Cort, an African savannah dance stage (the last of the numbered stages – #12, and the latest in the “world views” collection) and something called “Rana Dummy bone” (see below). There’s also the “I Can Talk” PDF file, and a couple sample Vocaloid data files demoing Rana simply speaking.

For Talkaloid, one of the sample demo files has Rana giving a fake interview, introducing herself and giving her height, weight and dimensions. The speaking voice used for Rana is set to a fairly high register, which gives a kind of unpleasant tinny, mechanical edge to it. I have been thinking about just using her as a kind of narrator, but I think her voice parameters are going to need to be tweaked a lot to make Talkaloid at all useful (it’s not going to help that Talkaloid doesn’t support English phonemes).


(Example showing the “down v notch” on Volume as described in the SSW tutorial below.)

Tutorials:
Vocaloid:
The focus is on rap, and the point is to try to undo some of the functionality Vocaloid implements in turning lyrics into music, to make them sound more like they’re being spoken. In the demo song, the editors just did a real number on the pitch settings for each phoneme, going in and changing pitch at random. The next step is to adjust pitch bend to make the sounds between phonemes flow together more smoothly. Then, to make the lyrics a little more interesting, the phoneme play times are shortened in a few places to get a more staccato effect. This part is more about music theory, explaining why the composer made the choices he did, than it is about how to use Vocaloid. The finished demo song is “Hard Sell of Love”, and it’s got a lot less to do with rap than it does with simple talking lyrics (where the singer takes a conversational tone to explain something to the listener). The only really useful part of this demo is where the editors show the use of “v” up and “v” down waveshapes for pitch and DYN on certain phonemes to get a stronger emotional expression in things like “iyan” (i.e. – the “don’t touch me there” sound used by some Japanese women).

SSW:
The first couple of minutes of the tutorial are just a playback of the demo song, identifying which portions are the intro, A melody and B melody. Then, we get a bit of a walkthrough to take that “iyan” sound from the Vocaloid tutorial, drop the volume in the middle of the phoneme, and then bring up the mixer to change the Graphic EQ, Reverb and Compression settings on that track. The result is mechanical and tinny, not exactly as sweet as it could be. The rest of the video gets into electronica again, with the idea being to take the synth bass track, open the Alpha3 editor, crank up the cutoff filter and resonance a bit, then map LFO1 to Main Pitch. This last mod makes the note frequency vary with the triangle wave of the LFO, which makes the bass synth sound pretty strange when combined with the rest of the song.


(African dance stage, with Relaxed Rana, and the jumping dance motion file.)

MMD:
Cort’s tips this time regard when and when not to apply effects to the video. This starts out as a series of cautions for pitfalls (or, in Japanese “holes you fall into that you didn’t know were there”). Pitfall #1) The more effects you add, the slower your computer will run. #2) Adding effects can block out your main actor (don’t put flare effects in front of Rana’s face, because you won’t be able to see her anymore). #3) Don’t rely just on Miku Miku Effects (MME), use AviUtil aftereffects, too. MME can be used for shaders, particle effects and post effects. Other movie editor software is somewhat good for particle effects, but especially good for post editing. The choice for when to use each boils down to how much time it takes to apply them to the movie. MME can run slow, and if you make a mistake and have to go back to correct it, it’s going to be time consuming. That’s why other video editing applications can be useful off and on. Most of the tips for using MME are too simple to repeat here, and the section on particles duplicates what’s in the Dummy Bone tutorial. One thing worth mentioning is M4Layer (check the usage video on Nico Douga). M4Layer, as the name implies, can be used to create videos with multiple layers, as in having a background image, the foreground character, and then text scrolling horizontally across the screen. I haven’t used it, but it seems to employ an .inf-style info file for storing your instructions, the text to scroll, and layer effects. Cort mentions that one package he likes now is suibokusan set (ink painting set). The tutorial ends with an example of MME being applied to the sun in the African savanna stage to get a stronger glowing effect.

Dummy Bone:
This is a special case, in that there’s an extra tutorial video with this issue. Dummy Bone is just that. In the MMD modeling system, actors that can move around the stage, such as Rana, Robo-panda, Miku Hatsune, etc., are bone-based. That is, all movement actions involve selecting something like the left hand, the right foot, or an elbow, and then pulling and/or rotating it. Moving something causes whatever it’s connected to to be pulled along with it. So, if you select the model’s right hand and change its position 3 meters to the side, the entire rest of the model is going to be pulled too, making the model look like it’s being yanked through the air. Additionally, there’s something called the “center bone”, which represents the model’s spine. Moving the center bone lets you reposition the model without getting that “drag the rag doll around” effect.

Dummy bone is essentially a simple model with three bones that are not directly connected together, designed to be used with any of the effects plug-ins. The tutorial discusses how to make it look like Rana is surrounded by fireworks sparklers. The idea is to drag the dummy bone model into MMD, and then associate each of the three bones with the particle generator plug-in. If you set the keyframes for the dummy bones and have them rotate upward in a spiral with decreasing radius, you’re going to get sparks forming a kind of Christmas tree shape. But, because there’s no real model data associated with the dummy bones, they are invisible when the movie is rendered – you just get the sparks without any clue as to what is making them. Of course, the dummy bone model can be used with other effects plug-ins as well, depending on what you’re trying to do for your finished video.

Additional comments:
I haven’t played with Talkaloid yet, so I can’t comment on that aspect, but the user PDF shows that it’s just Vocaloid with flatter parameters for brightness and pitch. However, any opportunity to mess with the Alpha3 is a good thing, so I like the SSW tutorial just on that count. I don’t have any particular need for the dummy bone model, but it’s nice to know it’s there.

Reply to a Gakken post comment


Last week, when I wrote that there’s been no activity from Gakken for months, and no new kit in a year, Micheal replied with “Most likely they don’t have any new ideas to run with. I would’t mind them coming out with a one tube AM transmitter. Instead those have to modify their existing kit.”

I’d like to use the space here to give my thoughts on the current Gakken kit situation. First, I don’t think it’s a simple matter of not having any new ideas. At the back of each kit mook, there’s a survey card readers are asked to fill out. The card includes a list of potential kit ideas and the company then looks at the popularity of each idea to see which ones might have an audience. So, the company has ideas, and it kind of knows how well each one might sell.

I think the problems are:

1) Having something to base the magazine on. If the kit is part of the mook series (and not one of the higher-priced stand-alone kits), the authors need to write about the underlying principles behind the kit, the history of that type of machine, and interview a few well-known specialists in that field. Plus, they want 3-4 pages of pictures of antiques for a photo spread. If they pick a kit with nothing to write about, or they re-do something they’ve published before (like the cameras, or the steam engine), they won’t have enough material for a 40-60 page magazine.

2) The prices of the kits have been going up and the kits have gotten more elaborate. The first kit, the little putt-putt boat, had maybe 5 parts and cost 1,600 yen. The latest kits have been getting up around 20-40 parts and anywhere between 3,500 and 3,900 yen. That means the costs for producing the kits are going up, not to mention the salaries of the writers and editors. There’s kind of an unspoken ceiling that Gakken is hesitant to break through because they’re afraid of losing customers. If the kit price goes over 4,000 yen, people are going to question whether the kits are worth buying.

3) The government raised the sales tax by 3 percentage points a couple of years ago, from 5% to 8%. At the time, they said it was to raise money to reduce the national debt, but most of the taxes have gone to efforts to rewrite the constitution to let Japan create more of a standing army, and to donations to disaster-hit regions around the world. The result of the increased taxes, instead of jump-starting the economy like the government hoped, has been a scaling back of household spending, the closing of many smaller boutique companies, and an increase in unemployment. Gakken has lost sales in all of this, and they’re naturally going to look at what products sell better than the others, and cut anything that isn’t selling well. This is especially a problem for customers because the government is still convinced that higher sales taxes are going to fix everything, and is insisting on bumping them another 2 points, to 10%, in 2017. Companies know this, and they’re looking at moving their marketing focus from the domestic market to international ones.

In summary, boneheaded sales tax decisions by the current ruling political party are killing the markets in Japan, and spending is going down (while wages aren’t changing). This, coupled with the continuous increase in kit complexity and sales price, is causing Gakken to rethink which products to keep selling in order to maintain their profits.