
Monday, 3 June 2024

Intersections and Proximities of p-planes.

 In \(D\) dimensions, a \(p_1\)-plane and a \(p_2\)-plane intersect in a plane of at least \(p_1+p_2-D\) dimensions, or more in degenerate configurations. If \(p_1+p_2-D<0\), they can instead have a proximity object, or an intersection, of dimension \(0\) to \(\min\{p_1,p_2\}\) (i.e. no intersection is guaranteed). This is because each parameter of each plane adds one unknown to the \(D\) coordinate equations obtained by equating the two planes.

The two planes may be written as

\[\vec{x}(a) = \vec{v}_0 + \sum_{k=1}^{p_1}a_k \vec{v}_k,\]

\[\vec{y}(b) = \vec{w}_0 + \sum_{k=1}^{p_2}b_k \vec{w}_k.\]


The proximity or intersection may be found by minimising

\[\min_{a,b}\left|\vec{x}(a) - \vec{y}(b)\right|,\]

which leads to the extremum equations:

\[\begin{cases}
0=\frac{\partial}{\partial a_k}\left|\vec{x} - \vec{y}\right|^2 = 2\left(\vec{x} - \vec{y}\right)\cdot\frac{\partial}{\partial a_k}\left(\vec{x} - \vec{y}\right)= 2\left(\vec{x} - \vec{y}\right)\cdot\frac{\partial \vec{x}}{\partial a_k}
\\
0=\frac{\partial}{\partial b_k}\left|\vec{x} - \vec{y}\right|^2 = 2\left(\vec{x} - \vec{y}\right)\cdot\frac{\partial}{\partial b_k}\left(\vec{x} - \vec{y}\right) = -2\left(\vec{x} - \vec{y}\right)\cdot\frac{\partial \vec{y}}{\partial b_k}
\end{cases}\]

\[\Rightarrow\begin{cases}
0=\left(\vec{x} - \vec{y}\right)\cdot\frac{\partial \vec{x}}{\partial a_k}=\vec{v}_k\cdot\left(\vec{x} - \vec{y}\right)
\\
0=\left(\vec{x} - \vec{y}\right)\cdot\frac{\partial \vec{y}}{\partial b_k}=\vec{w}_k\cdot\left(\vec{x} - \vec{y}\right)
\end{cases}\]

\[\Rightarrow\begin{cases}
0=\vec{v}_k\cdot\left(\vec{x} - \vec{y}\right) =\vec{v}_k \cdot\left( \vec{v}_0 - \vec{w}_0\right) + \sum_{l=1}^{p_1}\vec{v}_k\cdot\vec{v}_l\, a_l - \sum_{l=1}^{p_2}\vec{v}_k\cdot\vec{w}_l\, b_l
\\
0=-\vec{w}_k\cdot\left(\vec{x} - \vec{y}\right) =-\vec{w}_k \cdot\left( \vec{v}_0 - \vec{w}_0\right) - \sum_{l=1}^{p_1}\vec{w}_k\cdot\vec{v}_l\, a_l+ \sum_{l=1}^{p_2}\vec{w}_k\cdot\vec{w}_l\, b_l
\end{cases}\]

The equations may be rewritten as

\[\begin{pmatrix}*&*&*\\\vec{v}_k\cdot\left( \vec{v}_0 - \vec{w}_0\right)&\vec{v}_k\cdot\vec{v}_l&-\vec{v}_k\cdot\vec{w}_l \\-\vec{w}_k\cdot\left( \vec{v}_0 - \vec{w}_0\right)&-\vec{w}_k\cdot\vec{v}_l&\vec{w}_k\cdot\vec{w}_l\end{pmatrix}\begin{pmatrix}1 \\ a_l \\ b_l \end{pmatrix}=\begin{pmatrix}* \\ 0 \\ 0 \end{pmatrix}\]

or

\[\begin{pmatrix}\vec{v}_k\cdot\vec{v}_l&-\vec{v}_k\cdot\vec{w}_l \\-\vec{w}_k\cdot\vec{v}_l&\vec{w}_k\cdot\vec{w}_l\end{pmatrix}\begin{pmatrix}a_l \\ b_l \end{pmatrix}=\begin{pmatrix} -\vec{v}_k\cdot\left( \vec{v}_0 - \vec{w}_0\right) \\ \vec{w}_k\cdot\left( \vec{v}_0 - \vec{w}_0\right) \end{pmatrix}\]

With the convention that the \(\vec{v}\)'s are column vectors, and defining

\[{\mathbf{m}_0}_{(D \times 1)} := \vec{v}_0-\vec{w}_0 ,\quad \mathbf{M}_{(D \times (p_1 + p_2))} := \begin{pmatrix} \vec{v}_k & -\vec{w}_k \end{pmatrix},\quad \mathbf{a}_{((p_1 + p_2 )\times 1)}:=\begin{pmatrix}a_k \\ b_k \end{pmatrix},\]

the equations can be rewritten as

\[ (\mathbf{M}^T \mathbf{M}) \mathbf{a}= -\mathbf{M}^T \mathbf{m}_0. \]

So the solution for the parameters is

\[\mathbf{a}= -(\mathbf{M}^T \mathbf{M})^{-1} \mathbf{M}^T \mathbf{m}_0\]

unless \(\mathbf{M}^T \mathbf{M}\) is singular.
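As a concrete sketch, this least-squares solution can be computed with NumPy (the function and variable names are my own):

```python
import numpy as np

def plane_proximity(v0, V, w0, W):
    """Closest-approach parameters of x(a) = v0 + V a and y(b) = w0 + W b.
    V is D x p1 and W is D x p2, with the basis vectors as columns."""
    M = np.hstack([V, -W])                     # D x (p1 + p2)
    m0 = v0 - w0
    # min |m0 + M a| is the least-squares problem M a ~ -m0, whose
    # normal equations are exactly (M^T M) a = -M^T m0
    a, *_ = np.linalg.lstsq(M, -m0, rcond=None)
    return a[:V.shape[1]], a[V.shape[1]:]      # the a_k and the b_k

# e.g. two skew lines in 3D: closest at a = b = 0
a, b = plane_proximity(np.zeros(3), np.array([[1., 0, 0]]).T,
                       np.array([0, 0, 1.]), np.array([[0, 1., 0]]).T)
```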


If it is singular, we need to diagonalise it to separate the singular (free) components from the non-singular (constrained) ones. We may use the SVD to avoid catastrophic numerical results.

\[(\mathbf{M}^T \mathbf{M}) =: \mathbf{U}\mathbf{\Sigma} \mathbf{V}^T\]

where \(\mathbf{U}\) and \(\mathbf{V}\) are orthogonal (unitary in the complex case) and \(\mathbf{\Sigma}\) is a nonnegative diagonal matrix.

Rotating the parameters by the orthogonal matrices diagonalises the system and exposes the free variables:

\[\mathbf{U}\mathbf{\Sigma} \mathbf{V}^T\mathbf{a}= - \mathbf{M}^T \mathbf{m}_0 \;\Rightarrow \;\mathbf{\Sigma} \mathbf{V}^T\mathbf{a}= -\mathbf{U}^T \mathbf{M}^T \mathbf{m}_0=: \begin{pmatrix}m_r \\ 0\end{pmatrix}\]

In the rows where \(\Sigma\) is zero, the right-hand-side components are also zero, so the solution in those rows is indeterminate: a zero divided by zero.

\[ \mathbf{V}^T\mathbf{a}=\mathbf{\Sigma}^{-1} \begin{pmatrix}m_r \\ 0\end{pmatrix} = \begin{pmatrix}\Sigma_r^{-1} m_r \\ 0^{-1} \cdot 0\end{pmatrix}.\]

Now replace the indeterminate components with free parameters \(z_i\):

\[ \mathbf{V}^T\mathbf{a}= \begin{pmatrix}\Sigma_r^{-1} m_r \\ z_i\end{pmatrix}\;\Rightarrow \;\mathbf{a}= \mathbf{V}\begin{pmatrix}\Sigma_r^{-1} m_r \\ z_i\end{pmatrix}.\]
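Continuing the sketch, the same solution with explicit free parameters, using the SVD of \(\mathbf{M}\) itself rather than \(\mathbf{M}^T\mathbf{M}\) (equivalent, since \(\mathbf{M}^T\mathbf{M}=\mathbf{V}\mathbf{\Sigma}^2\mathbf{V}^T\), and better conditioned numerically):

```python
import numpy as np

def plane_proximity_svd(v0, V, w0, W, z=None, tol=1e-12):
    """Closest-approach parameters with the singular case made explicit:
    rows with (numerically) zero singular values are indeterminate, and
    their free parameters z default to 0, the minimum-norm choice."""
    M = np.hstack([V, -W])
    m0 = v0 - w0
    U, s, Vt = np.linalg.svd(M)
    r = int((s > tol * s.max()).sum())          # numerical rank
    if z is None:
        z = np.zeros(M.shape[1] - r)            # one z_i per free direction
    head = (U[:, :r].T @ -m0) / s[:r]           # Sigma_r^{-1} m_r
    return Vt.T @ np.concatenate([head, z])     # a = V (head; z)
```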


We can also find the shortest distance using the same equations and solution. The squared distance is

\[\left|\vec{x} - \vec{y}\right|^2 = \left| \mathbf{m}_0 + \mathbf{M} \mathbf{a}\right|^2 = \mathbf{m}_0^T \mathbf{m}_0 +2 \mathbf{a}^T \mathbf{M}^T \mathbf{m}_0  +\mathbf{a}^T \mathbf{M}^T\mathbf{M}\mathbf{a} \]

\[= \mathbf{m}_0^T \mathbf{m}_0 + \mathbf{a}^T \mathbf{M}^T \mathbf{m}_0  = \mathbf{m}_0^T \left(\mathbf{I} - \mathbf{M}(\mathbf{M}^T\mathbf{M})^{-1} \mathbf{M}^T\right)\mathbf{m}_0,\]

where the normal equations \((\mathbf{M}^T \mathbf{M})\mathbf{a}= -\mathbf{M}^T \mathbf{m}_0\) cancel the quadratic term in the first step.
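The distance itself, as a sketch, with the pseudoinverse \(\mathbf{M}^{+}\) standing in for \((\mathbf{M}^T\mathbf{M})^{-1}\mathbf{M}^T\) so that the singular case is covered as well:

```python
import numpy as np

def shortest_distance(v0, V, w0, W):
    """|x - y| at the optimum: sqrt(m0^T (I - M M^+) m0)."""
    M = np.hstack([V, -W])
    m0 = v0 - w0
    proj = np.eye(len(m0)) - M @ np.linalg.pinv(M)   # I - M (M^T M)^{-1} M^T
    return float(np.sqrt(max(m0 @ proj @ m0, 0.0)))  # clamp tiny negatives
```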

Saturday, 25 May 2024

Point cloud Similarities

 Point to distribution (a point is smeared into a Gaussian centred on it):

\(\vec{p}=(p_x, p_y) \Rightarrow P(x,y;\vec{p})= \frac{1}{2\pi \sigma_x \sigma_y} \exp\left\{-\frac{(x-p_x)^2}{2\sigma_x^2}-\frac{(y-p_y)^2}{2\sigma_y^2}\right\}\)

\(\vec{p}=\sum_{i=1}^D p^i \hat{e}_i  \Rightarrow P(x^i; p^i)= (2\pi)^{-D/2}\left(\prod_{i=1}^D \sigma_{x^i}^{-1}\right) \exp\left\{-\sum_{i=1}^D \frac{{(x^i-p^i)}^2}{2{(\sigma_{x^i})}^2}\right\}\)

Correlation function between two one-point distributions, writing \(P(x^i) := P(x^i; 0)\) so that \(P(x^i; x_0^i) = P(x^i - x_0^i)\):

\(\left<P(x^i)P(x^i; x_0^i)\right> = {(4\pi)}^{-D/2} \left(\prod_{i=1}^D \sigma_{x^i}^{-1}\right) \exp\left\{-\sum_{i=1}^D \frac{{(x_0^i)}^2}{4{(\sigma_{x^i})}^2}\right\} \)
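A quick one-dimensional numerical check of this closed form, reading \(\left<\cdot\right>\) as the integral over \(x\):

```python
import numpy as np

# overlap of two width-sigma Gaussians separated by x0
sigma, x0 = 0.7, 0.3
x = np.linspace(-10, 10, 200_001)
P = lambda u: np.exp(-u**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
lhs = np.sum(P(x) * P(x - x0)) * (x[1] - x[0])    # <P(x) P(x; x0)>
rhs = np.exp(-x0**2 / (4 * sigma**2)) / (np.sqrt(4 * np.pi) * sigma)
assert abs(lhs - rhs) < 1e-9
```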

For two point clouds \(\{\vec{p}_j\}\) and \(\{\vec{q}_j\}\),

\(P(x^i;\{\vec{p}_j\}) = \sum_j P(x^i; p_j^i)\)

The correlation of two point clouds is

\(C(\{\vec{p}_j\}, \{\vec{q}_j\}) = \frac{\left<P(x^i;\{\vec{p}_j\})P(x^i;\{\vec{q}_j\}) \right>}{\sqrt{\left<P(x^i;\{\vec{p}_j\}) P(x^i;\{\vec{p}_j\}) \right>\left<P(x^i;\{\vec{q}_j\}) P(x^i;\{\vec{q}_j\}) \right>}}\)

We want to evaluate \(\left<P(x^i;\{\vec{p}_j\}) P(x^i;\{\vec{q}_j\}) \right>\) using \(P(x^i;\{\vec{p}_j\}) = \sum_j P(x^i; p_j^i)\).

\(C'(\{\vec{p}_j\}, \{\vec{q}_j\}):=\left<P(x^i;\{\vec{p}_j\}) P(x^i;\{\vec{q}_j\}) \right>\)
\(\qquad \qquad\qquad = \left< \sum_{j,k} P(x^i; p_j^i) P(x^i; q_k^i) \right>\)
\(\qquad \qquad\qquad \sim\sum_{j,k} \exp\left\{-\sum_{i=1}^D \frac{{(p_j^i - q_k^i)}^2}{4{(\sigma_{x^i})}^2}\right\} \)

To maximise \(C'(\{\vec{p}_j\}, \{\vec{q}_j\})\) over a shift of \(q\), we need the derivative \(\frac{d}{d\Delta q^i}C'(\{\vec{p}_j\}, \{\vec{q}_j + \Delta\vec{q}\})\).

\[\frac{d}{d\Delta q^i}C'(\{\vec{p}_j\}, \{\vec{q}_j + \Delta\vec{q}\}) \sim \frac{d}{d\Delta q^i}\sum_{j,k} \exp\left\{-\sum_{i=1}^D \frac{{(p_j^i - q_k^i - \Delta q^i)}^2}{4{(\sigma_{x^i})}^2}\right\} \]
\[=\sum_{j,k}\frac{p_j^i - q_k^i - \Delta q^i}{2\sigma_{x^i}^2} \exp\left\{-\sum_{i=1}^D \frac{{(p_j^i - q_k^i - \Delta q^i)}^2}{4{(\sigma_{x^i})}^2}\right\} \]
\[\overset{\Delta q = 0}{=} \frac{1}{2\sigma_{x^i}^2}\sum_{j,k}(p_j^i - q_k^i) \exp\left\{-\sum_{i=1}^D \frac{{(p_j^i - q_k^i )}^2}{4{(\sigma_{x^i})}^2}\right\} \]

In the same sense,

\(
\left.\frac{d^2}{{(d\Delta q^i)}^2}C'(\{\vec{p}_j\}, \{\vec{q}_j + \Delta\vec{q}\})\right|_{\Delta q = 0} \)
\[\sim
 \sum_{j,k}\left[\frac{(p_j^i - q_k^i)^2}{4\sigma_{x^i}^4} - \frac{1}{2\sigma_{x^i}^2}\right] \exp\left\{-\sum_{i=1}^D \frac{{(p_j^i - q_k^i )}^2}{4{(\sigma_{x^i})}^2}\right\}
\]

We can run Newton's method using these first and second derivatives; a sketch follows.
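A sketch of that iteration in NumPy (axis-wise Newton steps built from the diagonal second derivatives above; the names are my own):

```python
import numpy as np

def corr_grad_hess(P, Q, sigma):
    """1st and 2nd derivatives of C' w.r.t. a global shift of the cloud Q.
    P: (n, D) points, Q: (m, D) points, sigma: (D,) kernel widths."""
    d = P[:, None, :] - Q[None, :, :]                  # (n, m, D): p_j - q_k
    w = np.exp(-(d**2 / (4 * sigma**2)).sum(-1))       # (n, m) Gaussian weights
    grad = (d / (2 * sigma**2) * w[..., None]).sum((0, 1))
    hess = ((d**2 / (4 * sigma**4) - 1 / (2 * sigma**2))
            * w[..., None]).sum((0, 1))                # diagonal Hessian only
    return grad, hess

def newton_shift(P, Q, sigma, iters=20):
    """Shift of Q that locally maximises C' (assumes hess stays nonzero)."""
    dq = np.zeros(P.shape[1])
    for _ in range(iters):
        g, h = corr_grad_hess(P, Q + dq, sigma)
        dq -= g / h                                    # per-axis Newton step
    return dq
```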

Saturday, 18 May 2024

Interpolation function training

The ideal interpolation kernel is the sinc function, which contains all frequencies up to the Nyquist frequency. However, it would require an infinite-length convolution in discrete calculation, so we apply a window function. In signal processing, the interpolation kernel is therefore a windowed sinc [https://www.analog.com/media/en/technical-documentation/dsp-book/dsp_book_Ch16.pdf], and the window function can be varied (1) to narrow the window width and reduce computation, and (2) at the same time to increase accuracy and reduce artifacts. These two conditions contradict each other, so we have to compromise and optimise.

    The kernel generally must satisfy \(f(0) = 1\) and \(f(n) = 0\) for every nonzero integer \(n\); additionally I force it to be a continuous function, to avoid catastrophic artifacts. A generic window function may be written as a Fourier series (cosine-only if symmetric) [https://en.wikipedia.org/wiki/Window_function]

\[ \displaystyle
w(t)=
\begin{cases}
 \sum_{k=0}^{N}a_k \cos k \pi t  + \sum_{k=1}^{N} b_k \sin k\pi t\quad\text{if  \(t\in[-1,1]\)}
\\
0\quad\text{else}
\end{cases} \]

with absolute constraints

\[\textbf{Constraint 1: }\quad w(0) = \sum_{k=0}^{N} a_k = 1,\]

\[\textbf{Constraint 2: }\quad w(1) =\sum_{k=0}^{N}(-1)^k a_k = 0, \]

assuming the window width to be \([-1, 1]\) (for practical use, scale it in the \(t\)-direction).
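For concreteness, a small sketch that evaluates such a window from its coefficients (the function name and argument layout are my own):

```python
import numpy as np

def window(t, a, b):
    """w(t) = sum_k a_k cos(k pi t) + sum_k b_k sin(k pi t) on [-1, 1],
    zero outside; a holds k = 0..N and b holds k = 1..N."""
    t = np.asarray(t, dtype=float)
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    ka = np.arange(len(a))                  # cosine orders 0..N
    kb = np.arange(1, len(b) + 1)           # sine orders 1..N
    w = a @ np.cos(np.pi * ka[:, None] * t) + b @ np.sin(np.pi * kb[:, None] * t)
    return np.where(np.abs(t) <= 1.0, w, 0.0)
```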


Projection of the window parameters \(\{a_k, b_k\}\) onto the constraints.

    From the \(\{a_k, b_k\}\) vector space, two constraints must be satisfied absolutely. The situation is analogous to projecting a point onto two planes simultaneously, i.e. onto the line that is their intersection; the projection must move perpendicular to both planes. We can develop a training process that moves only along the constraints, but because of accumulated calculation errors we still have to develop a way to project back onto the constraints. The training process that optimises the kernel against a large audio dataset then corresponds to this \((2N+1)\)-dimensional point moving toward the minima or maxima of the loss.

    Let us make the projection idea more concrete. The \((2N+1)\)-dimensional point must always satisfy the two constraints above, so the valid space satisfying them is \((2N-1)\)-dimensional. The space along which a given point is projected is \(2\)-dimensional: it passes through the point and is spanned by the two normal vectors of the two \(2N\)-dimensional constraint planes. The intersection of this projection plane with the valid \((2N-1)\)-dimensional space is the projected point we want.

    The two normal vectors are the same as the coefficient vectors of the plane equations, so the projection plane through a point \(\{a'_k, b'_k\}\) is written as

\[ \{a_k, b_k\}  = \{a'_k, b'_k\} + \{1, 0\} x + \{(-1)^k, 0\} y \]

To obtain the intersection of this projection plane with the two constraints on \(w(0)\) and \(w(1)\), we substitute the \(\{a_k, b_k\}\) expression of the plane into each constraint equation.

\[ w(0) = \sum_{k=0}^{N} (a'_k + x + (-1)^k y) = 1 \]

\[ w(1) = \sum_{k=0}^{N} (-1)^k(a'_k + x + (-1)^k y) = 0 \]

From these equations we obtain \(x\) and \(y\), i.e. how far the intersection has moved within the projection plane from the original point. Simplified,

\[ w(0) =(N+1) x + \frac{1 + (-1)^N}{2}y + \sum_{k=0}^{N} a'_k  = 1 \]

\[ w(1) = \frac{1 + (-1)^N}{2} x + (N+1)y + \sum_{k=0}^{N} (-1)^k a'_k = 0 \]

In matrix form,

\[
\begin{pmatrix}
(N+1) & \frac{1 + (-1)^N}{2}
\\ \frac{1 + (-1)^N}{2} & (N+1)
\end{pmatrix}
\begin{pmatrix} x\\y \end{pmatrix}
=
\begin{pmatrix}
1-\sum_{k=0}^{N} a'_k 
\\
-\sum_{k=0}^{N} (-1)^k a'_k 
\end{pmatrix}
\]

\[
\Rightarrow
\begin{pmatrix} x\\y \end{pmatrix}
=
\frac{1}{(N+1)^2 - \frac{1 + (-1)^N}{2}}
\begin{pmatrix}
(N+1) & -\frac{1 + (-1)^N}{2}
\\ -\frac{1 + (-1)^N}{2} & (N+1)
\end{pmatrix}
\begin{pmatrix}
1-\sum_{k=0}^{N} a'_k 
\\
-\sum_{k=0}^{N} (-1)^k a'_k 
\end{pmatrix}
\]

\[\Rightarrow
\begin{pmatrix} x+y\\x-y \end{pmatrix}=
\dfrac{
  (N+1) -\frac{1+(-1)^N}{2} \mathrm{diag}\{1, -1\}
}{(N+1)^2 - \frac{1+(-1)^N}{2}}
\begin{pmatrix}
1-\sum_{k=0}^{N} a'_k -\sum_{k=0}^{N} (-1)^k a'_k 
\\
1-\sum_{k=0}^{N} a'_k +\sum_{k=0}^{N} (-1)^k a'_k 
\end{pmatrix}
\]

\[\Rightarrow
\begin{pmatrix} x+y\\x-y \end{pmatrix}=
\dfrac{
  (N+1) -\frac{1+(-1)^N}{2} \mathrm{diag}\{1, -1\}
}{(N+1)^2 - \frac{1+(-1)^N}{2}}
\begin{pmatrix}
1-2\sum_{k=0}^{N} \frac{1 + (-1)^k}{2} a'_k 
\\
1-2\sum_{k=0}^{N} \frac{1 + (-1)^{k+1}}{2} a'_k 
\end{pmatrix}
\]

Thus, the projected point \(\{a_k, b_k\}\) from the original point \(\{a'_k, b'_k\}\) is

\[
\begin{cases}
a_k = a'_k + x + (-1)^k y,\quad k\in[0,N],
\\
b_k = b'_k,\quad k\in[1,N].
\end{cases}
\]

\[
\displaystyle
\Rightarrow
\begin{cases}
a_k = a'_k +
\dfrac{
  (N+1) -\frac{1+(-1)^N}{2} (-1)^k
}{(N+1)^2 - \frac{1+(-1)^N}{2}}
\left(
1 -2\sum_{m=0}^{N}\frac{1+ (-1)^{k+m}}{2} a'_m 
\right)
\\
b_k = b'_k,\quad k\in[1,N].
\end{cases}
\]

    Thus, here we may define the projection operator \(P\) such that

\[
\displaystyle
\begin{cases}
P(a_k) := a_k +
\dfrac{
  (N+1) -\frac{1+(-1)^N}{2} (-1)^k
}{(N+1)^2 - \frac{1+(-1)^N}{2}}
\left(
1 -2\sum_{m=0}^{N}\frac{1+ (-1)^{k+m}}{2} a_m 
\right)
\\
P(b_k) := b_k,\quad k\in[1,N].
\end{cases}
\]


Gradient vector projection onto the constraints

Now, since projecting at every training iteration is inefficient, we want each update step itself to move only along the constraints.

    From an original valid point \(\{a^0_k, b^0_k\}\) such that \(P(a^0_k) = a^0_k\), consider a new training step \(\{a^0_k + \Delta a_k, b^0_k + \Delta b_k\}\), where \(\Delta a_k, \Delta b_k\) are determined only by the gradient of the loss, not by the constraints. Then we must apply \(P(a^0_k + \Delta a_k)\) to finalise the step.

\[
\small
\begin{aligned}
P(a^0_k+\Delta a_k) &= 
a^0_k +\Delta a_k+
\dfrac{
  (N+1) -\frac{1+(-1)^N}{2} (-1)^k
}{(N+1)^2 - \frac{1+(-1)^N}{2}}
\left(
1 -2\sum_{m=0}^{N}\frac{1+ (-1)^{k+m}}{2} (a^0_m + \Delta a_m)
\right)
\\ &= P(a^0_k) + \Delta a_k
-2
\dfrac{
  (N+1) -\frac{1+(-1)^N}{2} (-1)^k
}{(N+1)^2 - \frac{1+(-1)^N}{2}}
\sum_{m=0}^{N}\frac{1+ (-1)^{k+m}}{2} \Delta a_m
\end{aligned}
\]

    Thus, here we define the new projection operator \(\Delta P\) for the step vector of the coefficient point:

\[
\Delta P(\Delta a_k) :=  \Delta a_k
-2
\dfrac{
  (N+1) -\frac{1+(-1)^N}{2} (-1)^k
}{(N+1)^2 - \frac{1+(-1)^N}{2}}
\sum_{m=0}^{N}\frac{1+ (-1)^{k+m}}{2} \Delta a_m
\]


Pseudocode

\(P\) and \(\Delta P\) can be written in PyTorch:
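A minimal sketch (my own function names and tensor layout; only the \(a_k\) are passed in, since both operators leave the \(b_k\) unchanged):

```python
import torch

def _parity_and_coef(N: int, dtype: torch.dtype):
    """Shared pieces: the parity matrix (1 + (-1)^{k+m})/2 and the per-k
    prefactor [(N+1) - (-1)^k (1+(-1)^N)/2] / [(N+1)^2 - (1+(-1)^N)/2]."""
    k = torch.arange(N + 1)
    sign = 1.0 - 2.0 * (k % 2).to(dtype)                  # (-1)^k
    parity = ((k[:, None] + k[None, :]) % 2 == 0).to(dtype)
    c = 1.0 if N % 2 == 0 else 0.0                        # (1 + (-1)^N)/2
    coef = ((N + 1) - c * sign) / ((N + 1) ** 2 - c)
    return parity, coef

def P(a: torch.Tensor) -> torch.Tensor:
    """Project the a_k (k = 0..N) onto the constraints w(0)=1, w(1)=0."""
    parity, coef = _parity_and_coef(a.shape[-1] - 1, a.dtype)
    return a + coef * (1.0 - 2.0 * (parity @ a))

def dP(da: torch.Tensor) -> torch.Tensor:
    """Project a raw gradient step on the a_k onto the constraint tangent."""
    parity, coef = _parity_and_coef(da.shape[-1] - 1, da.dtype)
    return da - 2.0 * coef * (parity @ da)

# a training step might look like:
#   a = P(a - lr * dP(a.grad))   # dP keeps the step tangent, P fixes drift
```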

Sunday, 29 January 2023

The Fictitiousness of Psychological Tests

Companies that adopt psychological tests categorise personality types and judge people without any understanding of the individual, so the tests will, if anything, cause many misjudgements.

For example, I rely heavily on intuition rather than planning everything in advance. At the same time, I am perfectionist enough to give up on things that cannot be perfect, and when I do make plans, I prepare plan B and plan C as a matter of course and will overwork myself to carry out the plan I made.

If I took a psychological test, I would come out as the intuitive type, and if I then told the company I am a perfectionist, they would decide I was lying and evaluate me negatively. That is what comes of denying the many-sidedness of human beings. In the end, rather than finding a position where these traits could be put to good use, one is driven into a situation where one must lie in order to be hired.

So although we usually treat the MBTI as pseudoscience, most psychological tests that are not aimed at psychological treatment, even those designed by psychology majors, essentially miscategorise people in the same way the MBTI does.

Tuesday, 24 January 2023

MacBook Battery Bug

 When the MacBook Pro's optimised battery charging is on and the machine stays plugged in for days, the battery sometimes seems to be charged silently to 100% and jumps from 80% to 100% at the next boot. Is this only happening to me?

Friday, 7 October 2022

Calculating the expected value of lotto winnings

 

| Tier | Winning condition | Probability | Prize allocation |
|------|-------------------|-------------|------------------|
| 1st | 6 numbers match | 1 / 8,145,060 | 75% of the total prize pool after the 4th- and 5th-tier amounts are deducted |
| 2nd | 5 numbers match + bonus number match | 1 / 1,357,510 | 12.5% of the total prize pool after the 4th- and 5th-tier amounts are deducted |
| 3rd | 5 numbers match | 19 / 678,755 ≈ 1 / 35,724 | 12.5% of the total prize pool after the 4th- and 5th-tier amounts are deducted |
| 4th | 4 numbers match | 741 / 543,004 ≈ 1 / 733 | 50,000 KRW |
| 5th | 3 numbers match | 9,139 / 407,253 ≈ 1 / 45 | 5,000 KRW |

1st: 1,952,160,000 KRW
2nd: 54,226,666.67 KRW
3rd: 1,427,017.54 KRW
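A small sketch of the expected-value arithmetic, combining the odds in the table with the tier amounts listed above (the 4th and 5th tiers are fixed):

```python
# expected winnings per 1,000 KRW ticket, using the figures above
odds  = [1/8_145_060, 1/1_357_510, 19/678_755, 741/543_004, 9_139/407_253]
prize = [1_952_160_000, 54_226_666.67, 1_427_017.54, 50_000, 5_000]  # KRW
ev = sum(o * p for o, p in zip(odds, prize))
print(f"{ev:,.0f} KRW")   # ~500 KRW, about half the ticket price
```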

Friday, 2 September 2022

Lightning to USB 3 Camera Adapter review

 There are already plenty of USB kits on the market, but I bought the genuine Apple one wondering whether its power section might be different. There were some surprises.

First, the power quality was clearly better than that of ordinary third-party dongles. When I connected a DAC through a cheap Lightning-male-to-USB-C-female adapter, the noise was severe, and I had assumed it was a limitation of the DAC itself. With the genuine Apple adapter, the noise problem clearly decreased. I had expected an improvement only with external power supplied, but there was already this much difference on the phone's own power alone.

The problem was the external power. What I expected from an external supply was better power quality and more current capacity. Both had problems. When I connected external power and then the DAC, noise appeared in one ear and electrical tingling began. Alas, the ground-loop problem remains unsolved... It will be the same for everyone in ordinary setups without special isolation hardware. The charger uses a chassis ground tied through its flyback converter to a stray potential, and since this differs from the phone's internal ground, current flows. My ears hurt too much to keep using it.

Besides the DAC dongle, I also tried a small desktop USB DAC, but it would not power on at all. This was also unexpected. Normally USB 1 guarantees 100 mA of supply current, USB 2 500 mA, and USB 3 900 mA. But the maximum current Lightning can supply to an audio accessory is specified as 50 mA, with a momentary peak of 100 mA (see Apple's Lightning accessory guidelines), so the current falls far short. One reason I bought the USB 3 kit was to overcome this current shortage and drive external devices, so a USB device that will not even power on is almost certainly a sign of insufficient current.

Likewise, an external SSD could not be connected.

So I recommend the Lightning-to-USB 2 kit to others. Most devices are already built around it, so compatibility is good. The USB 3 kit cannot even be plugged into USB-A male ports (products like the iFi portable DACs) because of its power connector.

Apple's own firmware updates are supported, which is nice, but I am not sure what the practical benefit is.

Sorry, but no nitpicking. You may be angry that I use different words from the ones you know in your own field, but I know a great deal, so I am deliberately speaking loosely and broadly. What I say is not really wrong, so please don't complain just because I don't use your field's terminology.